zonybob opened this issue 1 year ago
Hi @zonybob!
We need a few more clarifications before we can look into this. Was there any existing state in module.kubic_crds before running the import step? Thanks!
@alexsomesan thanks for the quick reply!
This was initially a completely fresh installation. There was zero state when the imports were first run. I have since reproduced the import/plan error with other state present as well.
When the import ran, there was nothing concerning; the output was normal.
$ terraform import 'module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]' "apiVersion=machine.openshift.io/v1beta1,kind=MachineSet,namespace=openshift-machine-api,name=bs-ops-worker-us-east-1a"
module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]: Importing from ID "apiVersion=machine.openshift.io/v1beta1,kind=MachineSet,namespace=openshift-machine-api,name=bs-ops-worker-us-east-1a"...
module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]: Import prepared!
Prepared kubernetes_manifest for import
module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]: Refreshing state...
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
╷
│ Warning: Apply needed after 'import'
│
│ Please run apply after a successful import to realign the resource state to the configuration in Terraform.
╵
Releasing state lock. This may take a few moments...
When the plan runs, the error is ...
╷
│ Error: Instance cannot be destroyed
│
│ on ../modules/kubic-crds/machine.tf line 7:
│ 7: resource "kubernetes_manifest" "worker_machine_set" {
│
│ Resource module.kubic_crds.kubernetes_manifest.worker_machine_set["a"] has
│ lifecycle.prevent_destroy set, but the plan calls for this resource to be
│ destroyed. To avoid this error and continue with the plan, either disable
│ lifecycle.prevent_destroy or reduce the scope of the plan using the -target
│ flag.
╵
ERRO[0061] Hit multiple errors:
Hit multiple errors:
exit status 1
I'm seeing a similar issue, but even with some resources that are created by Terraform (on 2.13.1):
Here is the Terraform output of a secondary plan, run after the initial plan/apply.
Terraform detected the following changes made outside of Terraform since the
last "terraform apply":
# kubernetes_manifest.elasticsearch["elasticsearch"] has changed
~ resource "kubernetes_manifest" "elasticsearch" {
~ object = {
~ metadata = {
~ annotations = null -> {
+ "eck.k8s.elastic.co/orchestration-hints" = jsonencode(
{
+ no_transient_settings = true
+ service_accounts = true
}
)
+ "elasticsearch.k8s.elastic.co/cluster-uuid" = "kq-OmZ83Qj259Q6sTjgA6A"
}
name = "elasticsearch"
# (14 unchanged elements hidden)
}
~ spec = {
~ nodeSets = [
~ {
name = "rack1"
~ podTemplate = {
+ metadata = {
+ creationTimestamp = null
}
~ spec = {
~ initContainers = [
~ {
name = "sysctl"
+ resources = {}
# (2 unchanged elements hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
~ {
name = "rack2"
~ podTemplate = {
+ metadata = {
+ creationTimestamp = null
}
~ spec = {
~ initContainers = [
~ {
name = "sysctl"
+ resources = {}
# (2 unchanged elements hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
~ {
name = "rack5"
~ podTemplate = {
+ metadata = {
+ creationTimestamp = null
}
~ spec = {
~ initContainers = [
~ {
name = "sysctl"
+ resources = {}
# (2 unchanged elements hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
]
# (13 unchanged elements hidden)
}
# (2 unchanged elements hidden)
}
# (2 unchanged attributes hidden)
# (1 unchanged block hidden)
}
...
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# kubernetes_manifest.elasticsearch["elasticsearch"] must be replaced
-/+ resource "kubernetes_manifest" "elasticsearch" {
- computed_fields = [
- "metadata.annotations",
- "spec.nodeSets.podTemplate.metadata.creationTimestamp",
] -> null
~ manifest = {
~ spec = {
~ nodeSets = [
~ {
name = "rack1"
~ podTemplate = {
~ spec = {
~ containers = [
~ {
name = "elasticsearch"
~ resources = {
~ requests = {
~ cpu = "2" -> "1"
# (1 unchanged element hidden)
}
# (1 unchanged element hidden)
}
# (1 unchanged element hidden)
},
]
# (4 unchanged elements hidden)
}
}
# (3 unchanged elements hidden)
},
...
Note that Terraform wants to replace this resource even though nothing of significance has changed with it.
Terraform file:
resource "kubernetes_manifest" "elasticsearch" {
depends_on = [
...
]
for_each = local.ELASTICSEARCH_CONFIG
manifest = {
apiVersion = local.ELASTICSEARCH_API_VERSION
kind = local.ELASTICSEARCH_KIND
metadata = {
name = each.key
namespace = var.ELASTICSEARCH_NAMESPACE
}
spec = {
version = var.ELASTICSEARCH_VERSION
updateStrategy = each.value.updateStrategy
volumeClaimDeletePolicy = each.value.volumeClaimDeletePolicy
secureSettings = local.ELASTICSEARCH_SECURE_SETTINGS
http = {
service = {
spec = {
selector = each.value.http.service.spec.selector
}
}
tls = {
certificate = {
secretName = format("%s%s", "elasticsearch-cert", var.ENVIRONMENT_SUB_NAME != null ? "-${var.ENVIRONMENT_SUB_NAME}" : "")
}
}
}
podDisruptionBudget = {}
nodeSets = [
for node in each.value.nodeSets : {
name = node.name
count = node.count
config = try(merge(node.config, each.value.baseNodeConfig), each.value.baseNodeConfig)
podTemplate = node.podTemplate
volumeClaimTemplates = try(node.volumeClaimTemplates, null)
}
]
}
}
field_manager {
force_conflicts = true
}
}
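One mitigation sometimes used for server-populated fields is to list their paths under computed_fields, so the provider treats them as computed instead of reporting them as drift. This is a minimal sketch only; the paths mirror the ones already visible in the diff above and would need to be adjusted against your own plan output:

```hcl
resource "kubernetes_manifest" "elasticsearch" {
  manifest = {
    # ... manifest body as in the file above ...
  }

  # Sketch: mark paths that the cluster populates on its own (annotations,
  # defaulted pod-template fields) as computed, so a refresh does not
  # surface them as out-of-band changes.
  computed_fields = [
    "metadata.annotations",
    "spec.nodeSets",
  ]
}
```

Note that this only addresses the drift reporting; it does not by itself prevent a replacement plan when the schema is missing.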
Check if the CRD has a schema. If it doesn't, any change requires a replacement.
In the rest of the diff (where you've cut it off with ...), something should be marked with forces replacement. Can you paste the rest of the diff?
Below is the full output of the terraform plan command, but there is no mention of forces replacement.
Just a quick update here: I did some further testing, and it seems that any time there is a change under manifest.spec.(daemonset|deployment).podTemplate, a forces replacement happens. Would anyone know where in the CRD it would tell Terraform that changes here require a replacement?
It's because podTemplate has x-kubernetes-preserve-unknown-fields: true, which effectively makes it schemaless.
Check if the CRD has a schema. If it doesn't, any change requires a replacement.
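One way to check this from the cluster is to print the CRD's openAPIV3Schema; the CRD name below is taken from the MachineSet example earlier in this thread, so substitute your own:

```
kubectl get crd machinesets.machine.openshift.io \
  -o jsonpath='{.spec.versions[0].schema.openAPIV3Schema}'
```

If the output is empty, or if the relevant subtree carries x-kubernetes-preserve-unknown-fields: true, the provider has no per-field schema to diff against.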
Is this still the case? Is there any workaround? I'm really at a loss for how to import and manage certain core types that cannot be destroyed and have schemaless sub-objects.
This should be marked as a duplicate of #1928
Here is the definition of the MachineSet CRD. You can see that it contains x-kubernetes-preserve-unknown-fields.
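For reference, the relevant part of such a CRD looks roughly like this. This is a hedged sketch of the shape, not the exact MachineSet definition:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
    - name: v1beta1
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              # Fields under here are not described by the schema, so the
              # provider cannot diff them field-by-field and must treat any
              # change as requiring replacement.
              x-kubernetes-preserve-unknown-fields: true
```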
Terraform Version, Provider Version and Kubernetes Version
Effectively duplicate of #1712 but on v2.12.1
Affected Resource(s)
Terraform Configuration Files
worker-machine-set.yaml
Debug Output
Panic Output
Steps to Reproduce
terraform import 'module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]' "apiVersion=machine.openshift.io/v1beta1,kind=MachineSet,namespace=openshift-machine-api,name=bs-ops-worker-us-east-1a"
terragrunt plan -target "module.kubic_crds"
Expected Behavior
The plan should update the resource. This is the behavior I see with v2.8.0 of the provider.
Actual Behavior
The plan attempts to recreate the resource, which in this case is not allowed because lifecycle.prevent_destroy is true.
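For context, the guard in question is the standard Terraform lifecycle meta-argument on the resource, along these lines (a minimal sketch; the manifest body is omitted):

```hcl
resource "kubernetes_manifest" "worker_machine_set" {
  # ... manifest omitted ...

  lifecycle {
    # Any plan that would destroy (or replace) this resource fails instead.
    prevent_destroy = true
  }
}
```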
Important Factoids
Terragrunt is used to wrap the calls to terraform, but it is still terraform that is ultimately being called. Terragrunt simply provides me with a couple of pre/post hooks to run scripts.
References
#1712