hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

provider configurations must not depend on resources during import #17847

Closed Flowman closed 3 years ago

Flowman commented 6 years ago

Terraform Version

Terraform v0.11.7
+ provider.aws v1.14.1
+ provider.external v1.0.0
+ provider.null v1.0.0
+ provider.vault v1.1.0

Terraform Configuration Files

data "vault_aws_access_credentials" "vault" {
  backend = "aws"
  role    = "terraform"
}

provider "aws" {
  access_key = "${data.vault_aws_access_credentials.vault.access_key}"
  secret_key = "${data.vault_aws_access_credentials.vault.secret_key}"
  region     = "ap-southeast-2"
}

resource "aws_organizations_account" "this" {
  name  = "example"
  email = "example@example.com"
}

Expected Behavior

Resource should be imported

Actual Behavior

terraform import aws_organizations_account.this 111111111

Error: Provider "aws" depends on non-var "data.vault_aws_access_credentials.vault.0/data.vault_aws_access_credentials.vault.N". Providers for import can currently only depend on variables or must be hardcoded. You can stop import from loading configurations by specifying -config="".

jbardin commented 6 years ago

Hi @Flowman,

Sorry you're having an issue with this. Import configuration is a little more restrictive than what can be evaluated during plan and apply.

This is a known limitation of import, and is documented in the import provider configuration section on the website.

Flowman commented 6 years ago

I understand that, but in this case, where you want to use Vault with Terraform, it becomes a showstopper.

kylescottmcgill commented 6 years ago

I might not be understanding this correctly, but following the documentation as well as the tips here, I get the following:

 terraform » terraform import google_container_cluster.<resource_name> <project>/australia-southeast1-a/<resource_name>

Acquiring state lock. This may take a few moments...

Error: Provider "google" depends on non-var "local.workspace". 
Providers for import can currently only depend on variables or must 
be hardcoded. You can stop import from loading configurations 
by specifying `-config=""`.

 terraform » terraform import -config="" google_container_cluster.<resource_name> <project>/australia-southeast1-a/<resource_name>

Error: Import to non-existent module is not defined in the 
configuration. Please add configuration for this module before 
importing into it.

 terraform » terraform import -config="" -var="project=<project>" -var="region=australia-southeast1" -var="zone=australia-southeast1-a" google_container_cluster.<resource_name> <project>/australia-southeast1-a/<resource_name>

Error: Import to non-existent module is not defined in the 
configuration. Please add configuration for this module before 
importing into it.

When I originally saw the error message "You can stop import from loading configurations by specifying `-config=""`", my initial thought was to use an actual config, not a blank one.

I admit we may not be using workspaces in the best way; my provider.tf file contains the following:

variable "project" {
  type = "string"
}

variable "region" {
  type = "string"
}

variable "zone" {
  type = "string"
}

provider "google" {
  credentials = "${file("~/.config/gcloud/terraform.json")}"

  project = "${local.workspace["project"] ? local.workspace["project"] : var.project}"
  region  = "${local.workspace["region"] ? local.workspace["region"] : var.region}"
  zone    = "${local.workspace["zone"] ? local.workspace["zone"] : var.zone}"
}

Is there any way around this? I seem to be going in circles a bit.

Never mind, we are just hardcoding our provider block and getting better results.

stszap commented 6 years ago

Also encountered this today. I have 2 providers: aws (static parameters) and kubernetes (depends on an aws_eks_cluster resource), and tried to import an aws_route53_zone resource. Obviously the kubernetes provider has nothing to do with Route 53, but terraform still threw the Provider "kubernetes" depends on non-var... error. Only after commenting out the kubernetes provider was I able to import the resource. It seems that import doesn't work if ANY provider has dynamic parameters.
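
A minimal sketch of that kind of setup, with placeholder names and IDs; the kubernetes provider is derived from the EKS cluster, and that alone is enough to block importing the unrelated Route 53 zone:

provider "aws" {
  region = "us-east-1"
}

resource "aws_eks_cluster" "this" {
  name     = "example"
  role_arn = "arn:aws:iam::123456789012:role/eks-cluster"

  vpc_config {
    subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]
  }
}

data "aws_eks_cluster_auth" "this" {
  name = "${aws_eks_cluster.this.name}"
}

provider "kubernetes" {
  host                   = "${aws_eks_cluster.this.endpoint}"
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.this.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.this.token}"
}

# importing this unrelated zone still fails, because a provider elsewhere
# in the configuration has dynamic arguments:
#   terraform import aws_route53_zone.main <zone-id>
resource "aws_route53_zone" "main" {
  name = "example.com"
}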

BouchaaraAdil commented 5 years ago

The same here, trying to import a role:

Error: Provider "cloudflare" depends on non-var "data.terraform_remote_state.aws_us_east1_cloudflare.0/data.terraform_remote_state.aws_us_east1_cloudflare.N". Providers for import can currently
only depend on variables or must be hardcoded. You can stop import
from loading configurations by specifying `-config=""`.

wijowa commented 5 years ago

What is the intended way around having to hardcode provider blocks? We "generate" some keys in the provider blocks that cause this issue. These are generated at runtime. Should we be generating a variable file in advance based on a template instead?
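
For reference, the path the error message does allow is a provider that depends only on input variables (or is hardcoded); a rough sketch with made-up names, where the value is supplied on the command line at import time:

variable "assume_role_arn" {
  type = "string"
}

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = "${var.assume_role_arn}"
  }
}

# terraform import -var="assume_role_arn=arn:aws:iam::123456789012:role/import" aws_s3_bucket.example my-bucket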

andrejvanderzee commented 5 years ago

Running into the same problems with the following config:

locals {
  workspaces = {
    team1 = {
      env        = "team1"
      account_id = "abc"
    }

    team2 = {
      env        = "team2"
      account_id = "xyz"
    }
  }

  project    = "environment"
  env        = "${lookup(local.workspaces[terraform.workspace], "env")}"
  account_id = "${lookup(local.workspaces[terraform.workspace], "account_id")}"
}

provider "aws" {
  allowed_account_ids = ["${local.account_id}"]

  assume_role {
    role_arn = "arn:aws:iam::${local.account_id}:role/sre"
  }
}

Any plans on fixing this?

CholtonATX commented 5 years ago

We are using Versent/saml2aws to authenticate to JumpCloud and are running into this same issue. Any traction on this issue?

woz5999 commented 5 years ago

This is especially frustrating when the error is being thrown for providers that are completely unrelated to the resource you're trying to import. It shouldn't matter that my helm provider is using configs from variables when I'm trying to import an AWS resource.

RTodorov commented 5 years ago

Also facing this issue; no plans to fix it? @jbardin

jbardin commented 5 years ago

Hi,

Sorry about the delay here. This issue is something we are aware of, and will be addressing when possible. Once we have 0.12 complete, I don't think it will be too difficult to get these evaluations to work.

tomasaschan commented 5 years ago

@jbardin Looking forward to seeing this implemented! Being able to use providers pointing to terraform-provisioned resources is really powerful, so it's sad that it's not fully supported everywhere.

vdamianchicon commented 5 years ago

Also running into this issue in GCP, importing firewall_rules with Terraform v0.11.10:

Error: Provider "kubernetes" depends on non-var "data.google_client_config.current.0/data.google_client_config.current.N". Providers for import can currently only depend on variables or must be hardcoded. You can stop import from loading configurations by specifying -config="".

ekarlso commented 5 years ago

Any news?

arocki7 commented 5 years ago

I have got the same issue with AWS.

SpComb commented 5 years ago

This is surprising because it breaks imports for all resources, not just those related to the provider with non-var configurations.

A workaround is to manually import the resource by hacking your tfstate: you can either copy-paste the tfstate object from a similar resource and edit the id/attrs to match, or:

arocki7 commented 5 years ago

I did the following workaround to overcome this issue.

# temporarily reduce the provider to hardcoded, static arguments
provider "aws" {
  region = "eu-west-1"
}

# empty stub for the resource being imported; fill in its arguments after the import
resource "aws_elb" "clb" {
}

terraform import aws_elb.clb internal-clb-prod

akamalov commented 5 years ago

@jbardin pardon my ignorance, but am I correct in understanding that if my cloud credentials are stored in Vault, I will not be able to import existing resources, due to terraform import being a lot more restrictive than terraform plan/apply?

I am using the following to retrieve secrets stored in Hashicorp Vault:

###################################################################
# Retrieve Secrets From HashiCorp's Vault
###################################################################

provider "vault" {
  address = "${var.vault_addr}"
}

data "vault_generic_secret" "azure_credentials" {
  path = "secret/${var.vault_user}/azure/credentials"
}

provider "azurerm" {
  subscription_id = "${data.vault_generic_secret.azure_credentials.data["subscription_id"]}"
  tenant_id       = "${data.vault_generic_secret.azure_credentials.data["tenant_id"]}"
  client_id       = "${data.vault_generic_secret.azure_credentials.data["client_id"]}"
  client_secret   = "${data.vault_generic_secret.azure_credentials.data["client_secret"]}"
}

...and getting the same type of error as in this ticket:

Error: Provider "azurerm" depends on non-var "data.vault_generic_secret.azure_credentials.0/data.vault_generic_secret.azure_credentials.N". Providers for import can currently
only depend on variables or must be hardcoded. You can stop import
from loading configurations by specifying `-config=""`.

So, the workaround would be to declare credentials directly for the terraform import step and, once all resources are imported, reconfigure to use the Terraform Vault provider?

jbardin commented 5 years ago

@akamalov, yes that would be the workaround for now. It's usually recommended to leave the provider unconfigured and set the credentials in the environment.
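
For the Azure configuration above, a sketch of that workaround with placeholder values and a hypothetical resource address: strip the provider block down for the import step and let the azurerm provider read its credentials from the environment.

provider "azurerm" {}

$ export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
$ export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
$ export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
$ export ARM_CLIENT_SECRET="<placeholder>"
$ terraform import azurerm_resource_group.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example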

akamalov commented 5 years ago

@jbardin Thank you very much indeed for clarification.

sudoforge commented 5 years ago

Even with a blank state file, the plan and apply subcommands handle using data sources within provider blocks just fine. In other words, the following configuration works as expected when executing terraform plan and terraform apply, but fails when executing terraform import:

main.tf

provider "aws" {
  # the aws provider falls back to using credentials stored on the 
  # local machine, ~/.aws/credentials and thus does not need any
  # credentials defined if the user is already using the aws-cli
  region = "us-west-1"
  version = "~> 2.7"
}

data "aws_ssm_parameter" "datadog_api_key" {
  name = "/some/path/to/the/secret"
}

data "aws_ssm_parameter" "datadog_app_key" {
  name = "/some/path/to/the/other/secret"
}

provider "datadog" {
  api_key = "${data.aws_ssm_parameter.datadog_api_key}"
  app_key = "${data.aws_ssm_parameter.datadog_app_key}"
  version = "~> 1.8"
}

This causes many headaches, as in an ideal world, nobody would be creating resources outside of Terraform and we could be storing shared secrets like this in a secure manner (assuming remote state is used and secured appropriately, since the data sources would be stored in state).

Unfortunately, during the process of implementing Terraform throughout a company (before it is "required" and staff members still have access to create things), this is often not the real world case - resources get created ad-hoc by some team member and later imported into Terraform state, which out of necessity requires the use of variables within provider configuration instead of using data sources like the above example.

This means that secrets need to be passed around to everyone running Terraform to import things, and they need to load them in their own environment, potentially in an insecure or inconsistent manner. It would be great if this were not the case, but sometimes things are out of our own immediate control, and implementing processes in larger organizations takes time.

Are there any blocking reasons that import cannot be refactored to support data and local dependencies in provider initialization? Would the team accept a patch for this for the 0.12 release, or would it be deferred until after 0.12?

chris-brace commented 5 years ago

It needs to go further than just data and local. It needs to support references to other resources as well.

This is a massive pain for me because I'm using an EKS cluster and RDS together. This means that I have to parameterize the kubernetes provider (there's literally no way around this and there never will be.) The reason I need to import state is because restoring an RDS cluster from a snapshot makes a NEW cluster, so once thats done i need to replace the current aws_rds_instance resource with the newly created one by name.

I'm forced to interact with these two services, and terraform is the least painful way of doing so. The way they work will never change so the way I see it Terraform needs to change to deal with the correct administration of things like EKS and RDS.

tmccombs commented 5 years ago

I just ran into this. In my case I have a module that exports account ids for various AWS accounts as outputs. Then my provider config references those account_ids in an assume_role block. Interestingly, it did seem to use the profile specified in the provider, but just ignored the assume_role block. And the only error I got was an unhelpful "permission denied".
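
A sketch of the pattern described there, with a hypothetical module and attribute name:

module "accounts" {
  source = "./modules/accounts"
}

provider "aws" {
  profile = "shared"
  region  = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::${module.accounts.prod_account_id}:role/terraform"
  }
}

# during import the profile is honored, but the assume_role block is
# effectively ignored, which surfaces as a permission-denied error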

ToonSpinISAAC commented 4 years ago

I have just run into this as well because we've created a reasonably large repo, so plans take 20 minutes to complete, even when planning only a submodule. Because of a production incident, I decided to change a resource from the console, figuring that a plan takes much too long, and it would be easy to import later because it's just a single simple resource - I was wrong because here I am.

As for the recommendation being that credentials are set in the environment, we exclusively keep our AWS credentials in environment variables, and again, here I am, so maybe folks can take this as a caveat that moving your credentials from Vault to the environment may in fact not fix this issue for you.

GitHub issues that are raised about the poor Terraform plan performance get handwaved away, because the Terraform team says the performance is due to the great number of AWS API calls, when in fact most of the time is spent before the first "Refreshing Terraform state in-memory prior to plan..." output. Attempts by folks to explain this to the team had failed, last time I checked (which admittedly was a while ago).

So now I am finding myself in the situation where I either have to jump through hoops like the one @arocki7 mentioned (thanks btw! I'll give it a shot) or I have to take half an hour or longer to nuke and then replace the resource.

ToonSpinISAAC commented 4 years ago

@jbardin, the terraform import documentation is the wrong place to document this behavior. You want to document this in the provider documentation, because people only run into this when it's much too late: they need to know about this when creating their very first provider. Is that something I can bring to the team's attention somehow?

sudoforge commented 4 years ago

@ToonSpinISAAC:

As for the recommendation being that credentials are set in the environment, we exclusively keep our AWS credentials in environment variables, and again, here I am, so maybe folks can take this as a caveat that moving your credentials from Vault to the environment may in fact not fix this issue for you.


It's important to note that there is a difference between what Terraform does, and what any given Provider does. In this case, Terraform can (and does) take environment variables starting with a TF_VAR_ prefix and map them to any variable of the same name (without the prefix).

Some Providers expect (or rather, require) the user to configure credentials within the provider{} block. A good example of this is the Datadog Provider -- you can use the TF_VAR_ trick I described above, but within the provider block, you need to pass those variables in, e.g.:

$ export TF_VAR_my_datadog_api_key="1234....."
$ export TF_VAR_my_datadog_app_key="cccwwwasd...."
$ cat example.tf

variable my_datadog_api_key {}
variable my_datadog_app_key {}

provider datadog {
    api_key = var.my_datadog_api_key
    app_key = var.my_datadog_app_key
}

Others, like the AWS Provider, support additional methods for authentication, such as reading environment variables, or reading from a shared credentials file, e.g. ~/.aws/credentials. The AWS Provider documentation for Authentication would be a good place to get more information about the different ways the AWS Provider can consume authentication information.
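
For example, with the AWS Provider the credentials can stay out of the configuration entirely; a sketch with placeholder values (the file name is illustrative):

$ export AWS_ACCESS_KEY_ID="AKIA................"
$ export AWS_SECRET_ACCESS_KEY="<placeholder>"
$ export AWS_DEFAULT_REGION="us-west-1"
$ cat provider.tf

provider "aws" {
  # no credentials or region here; the provider reads them from the
  # environment or from the shared credentials file (~/.aws/credentials)
}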

This particular issue was opened because the author wasn't able to reference data properties within a provider{} block, as the example at the bottom of the first comment in this thread shows. This is a Terraform issue, because of the way that providers are currently initialized.


@ToonSpinISAAC

GitHub issues that are raised about the poor Terraform plan performance get handwaved away [...]

I've worked with a lot of different teams over the past few years and have seen all manner of "slow plan performance" -- from 5 minutes to close to an hour. It's important to note that while Terraform might be able to do some things to increase performance here, the bulk of that time is spent fetching external resource metadata. This is a result of keeping many different resources in the same state file, and the onus and ownership for that fall on the user.

Described another way, slow planning means there are a lot of resources in the same state file. This is not an inherent flaw or issue in Terraform, but with how the user has decided to use Terraform. You can usually get around that slow planning with good -target usage, but restructuring how you use Terraform might be the better long-term solution. Some examples of good patterns here include isolating the resources you keep in any given state file: by environment, by purpose, etc.
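
A quick sketch of the -target usage mentioned above (the resource address is made up); only the targeted resource and its dependencies are refreshed and planned:

$ terraform plan -target=aws_route53_record.api
$ terraform apply -target=aws_route53_record.api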

sudoforge commented 4 years ago

where in fact most of the time is spent before the first "Refreshing Terraform state in-memory prior to plan..." output.

Actually, it looks like I missed that little bit of your comment. When I've seen slowness here, it's usually due to using remote state; the "slowness" is from the connection to the state -- in the SSL handshake, in the GET request, or some other such issue. I wasn't immediately able to find a thread that was focused around this; could you open a new issue or ping me on one that provides more information?

tmccombs commented 4 years ago

the "slowness" is from the connection to the state -- in the SSL handshake, in the GET request, or some other such issue.

I've seen some slowness here as well. It isn't unbearable yet, but I suspect it has to do with the size of the state file.

sudoforge commented 4 years ago

That's definitely part of it, @tmccombs. I'd suggest that this is pulled out into another issue to keep things clean for the maintainers.

gchek commented 4 years ago

When doing terraform import, is Terraform responsible for setting provider parameters, or is the provider itself? Example: my provider uses a host parameter. When it is coded host="abcd" the import is fine, but when it is coded host=data.terraform_remote_state.phase1.outputs.abcd the import fails.

rehevkor5 commented 4 years ago

This is similar to https://github.com/hashicorp/terraform/issues/13018

weakcamel commented 3 years ago

@jbardin

Sorry about the delay here. This issue is something we are aware of, and will be addressing when possible. Once we have 0.12 complete, I don't think it will be too difficult to get these evaluations to work.

It's free software, so we (or at least I, as a user of the free edition) get that: you get what you get, and when you get it :-)

Now that 0.12 and 0.13 are out, though, it would be very much appreciated to hear what the future of this one is, if possible.

Thank you in advance!

jbardin commented 3 years ago

Hi @weakcamel

This does actually work in the latest release, as long as the referenced data sources have already been applied. Ensuring the existing resources are in state can be done before adding the resource to be imported, or via a targeted apply.
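
Taking the configuration from the top of this issue as an example, a sketch of that suggestion; whether a data source address can be targeted directly depends on the Terraform version, so an ordinary apply of the existing configuration before adding the new resource block also works:

$ terraform apply -target=data.vault_aws_access_credentials.vault
$ terraform import aws_organizations_account.this 111111111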

Since we are not going to be evaluating newly added resources or data sources during import, there is no way to improve this further given the current method for importing. We do have a long term goal of creating a more flexible import process, so I'm going to close this in favor of tracking one of the more general requests for a new workflow, of which #26364 most closely follows our current design plans.

Thanks!

ghost commented 3 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.