Closed: lots0logs closed this issue 4 years ago
This doesn't seem to be a regression. Depending on how the datasource and its dependencies are defined, this may be the expected datasource behaviour. Terraform advises that datasources are updated during the refresh stage (before apply) unless they use "known after apply" values. Your tf plan is saying module.k8s_cluster.data.rancher2_project.system will be read during apply (config refers to values not yet known).
Is the datasource using any "known after apply" value? Have you tried to terraform refresh the datasource before plan/apply? Have you applied the plan, and is the diff being shown again? For some reason, your defined datasource seems to be using a "known after apply" value, and Terraform is updating it during the apply stage instead of the refresh stage, forcing the update of dependent resources.
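For illustration, this is the pattern that defers a datasource read to apply time (a sketch with hypothetical resource names; the point is only that cluster_id comes from a resource Terraform may not have created yet):

```hcl
resource "rancher2_cluster" "cluster" {
  name = "example"
}

# Because cluster_id is derived from a managed resource, its value may be
# "known after apply", so Terraform defers this read to the apply stage.
data "rancher2_project" "system" {
  cluster_id = rancher2_cluster.cluster.id
  name       = "System"
}
```

By contrast, a datasource whose arguments are all literals (or already-known state values) is read during refresh, before the plan is shown.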
I've made some tests with the same tf version, using a similar config:
data "rancher2_project" "system" {
  name       = "System"
  cluster_id = <id>
}

resource "rancher2_secret" "test" {
  name       = "test"
  project_id = data.rancher2_project.system.id
  data = {
    address = "test"
  }
}
The provider is working fine from the previous v1.10.0 to the current v1.10.3. I also tested a provider upgrade, which did not generate any diff. As the tf log shows, the datasource is updated during the refresh stage, before apply, because the datasource values are known.
# terraform init
Initializing the backend...
Initializing provider plugins...
- Using previously-installed terraform-providers/rancher2 v1.10.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
# terraform apply
data.rancher2_project.system: Refreshing state...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# rancher2_secret.test will be created
+ resource "rancher2_secret" "test" {
+ annotations = (known after apply)
+ data = (sensitive value)
+ id = (known after apply)
+ labels = (known after apply)
+ name = "test"
+ project_id = "<id>"
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
rancher2_secret.test: Creating...
rancher2_secret.test: Creation complete after 6s [id=<id>]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
# terraform init
Initializing the backend...
Initializing provider plugins...
- Finding terraform-providers/rancher2 versions matching "1.10.2"...
- Installing terraform-providers/rancher2 v1.10.2...
- Installed terraform-providers/rancher2 v1.10.2 (signed by HashiCorp)
Warning: Additional provider information from registry
The remote registry returned warnings for
registry.terraform.io/terraform-providers/rancher2:
- For users on Terraform 0.13 or greater, this provider has moved to
rancher/rancher2. Please update your source in required_providers.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
# terraform apply
data.rancher2_project.system: Refreshing state... [id=<id>]
rancher2_secret.test: Refreshing state... [id=<id>]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
# terraform init
Initializing the backend...
Initializing provider plugins...
- Finding terraform-providers/rancher2 versions matching "1.10.3"...
- Installing terraform-providers/rancher2 v1.10.3...
- Installed terraform-providers/rancher2 v1.10.3 (signed by HashiCorp)
Warning: Additional provider information from registry
The remote registry returned warnings for
registry.terraform.io/terraform-providers/rancher2:
- For users on Terraform 0.13 or greater, this provider has moved to
rancher/rancher2. Please update your source in required_providers.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
# terraform apply
data.rancher2_project.system: Refreshing state... [id=<id>]
rancher2_secret.test: Refreshing state... [id=<id>]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
BTW, as you are using the System project, have you considered using the rancher2_cluster.cluster.system_project_id attribute instead of the datasource?
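That could look roughly like this (a sketch, assuming a rancher2_cluster resource named "cluster" already exists in your config):

```hcl
resource "rancher2_secret" "test" {
  name = "test"
  # system_project_id is exported by the rancher2_cluster resource,
  # avoiding the rancher2_project datasource lookup entirely.
  project_id = rancher2_cluster.cluster.system_project_id
  data = {
    address = "test"
  }
}
```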
I will look at this again on Monday and let you know. Thanks!
I think my conclusion is incorrect. If there is a bug, it's not what I thought. I will open a new issue if needed once I get to the bottom of it.
@rawmind0 I have a hunch as to what could be the problem.
Here is the data source in question:
data "rancher2_project" "system" {
  cluster_id = rancher2_cluster_sync.cluster.cluster_id
  name       = "System"
}
You are right that it is by design that Terraform is behaving this way, since the value of the cluster_id argument comes from the output of another resource. I wonder if it's really necessary to require providing the cluster id, though? Could that data source simply take the name argument, similar to how the rancher2_cluster data source works?
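For comparison, the rancher2_cluster data source does look up by name alone, so the id can be chained from it without depending on a managed resource (a sketch; "my-cluster" is a placeholder name):

```hcl
data "rancher2_cluster" "cluster" {
  name = "my-cluster"
}

# Both arguments here are known at refresh time, so the read is not deferred.
data "rancher2_project" "system" {
  cluster_id = data.rancher2_cluster.cluster.id
  name       = "System"
}
```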
Nope, because a project is scoped within a cluster. As mentioned, have you considered using the rancher2_cluster.cluster.system_project_id attribute instead of the datasource?
Yes, thank you for that btw!
The project_id argument for the two resources is sourced from a rancher2_project data source. It worked fine with previous versions of the provider. This is a recent regression.
cc: @rawmind0