Closed — caiconkhicon closed this issue 4 years ago
> This action will fail for sure, because revision-2 is in use (because of the 2nd step above).
@caiconkhicon , have you tested it, or are you just assuming it? ;) You can remove old template revisions with the provider, but the feature was limited for the reason you mentioned: it's a list. Anyway, PR #395 includes a fix to address this issue.
Hi @rawmind0, yes, it has happened to me many times. The part about "because revisions are treated as a list" is just my assumption/understanding, but the rest is factual, taken from Terraform's output. Ok, let me try your new patch. Thanks for your responsive support.
@rawmind0 : I tested it. The deletion works. However, I now face another weird error with creation. If I add a new revision AFTER the current revision, everything is fine. However, if I add the new revision BEFORE the current revision, then on `terraform apply`, besides adding the new revision, Terraform shows that many parameters of the current revision need to be changed:
```
  # module.rancher.rancher2_cluster_template.default will be updated in-place
  ~ resource "rancher2_cluster_template" "default" {
        annotations         = {}
        default_revision_id = "cattle-global-data:ctr-v48zp"
        description         = "K8S cluster template with hardening configuration"
        id                  = "cattle-global-data:ct-tz9rw"
        labels              = {
            "cattle.io/creator" = "norman"
        }
        name                = "default-global"

        members {
            access_type = "owner"
        }

      ~ template_revisions {
            annotations         = {}
          ~ cluster_template_id = "cattle-global-data:ct-tz9rw" -> (known after apply)
            default             = true
            enabled             = true
            id                  = "cattle-global-data:ctr-v48zp"
          ~ labels              = {
              - "io.cattle.field/clusterTemplateId" = "ct-tz9rw" -> null
            }
            name                = "1.17.2-2"

          ~ cluster_config {
                default_pod_security_policy_template_id = "restricted"
              ~ docker_root_dir                         = "/var/lib/docker" -> (known after apply)
                enable_cluster_alerting                 = false
                enable_cluster_monitoring               = false
                enable_network_policy                   = false
                windows_prefered_cluster                = false

              ~ rke_config {
                  ~ addon_job_timeout     = 30 -> 0
                    addons_include        = [
                        "https://raw.githubusercontent.com/jetstack/cert-manager/v0.12.0/deploy/manifests/00-crds.yaml",
                    ]
                    ignore_docker_version = true
                    kubernetes_version    = "v1.17.2-rancher1-2"
                    ssh_agent_auth        = false

                    ingress {
                        extra_args    = {}
                        node_selector = {}
                        options       = {
                            "log-format-escape-json" = "true"
                            "log-format-upstream"    = "{\"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\", \"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\": \"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\", \"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\": \"$http_user_agent\" }"
                            "proxy-body-size"        = "900m"
                        }
                        provider      = "nginx"
                    }

                    network {
                        mtu     = 0
                        options = {}
                        plugin  = "canal"
                    }

                  ~ services {
                      ~ etcd {
                          ~ creation      = "12h" -> (known after apply)
                            external_urls = []
                            extra_args    = {
                                "client-cert-auth"      = "true"
                                "peer-client-cert-auth" = "true"
                            }
                            extra_binds   = []
                            extra_env     = []
                            gid           = 0
                          ~ retention     = "72h" -> (known after apply)
                            snapshot      = false
                            uid           = 0
                        }

                      ~ kube_api {
                            admission_configuration = {}
                            always_pull_images      = false
                            extra_args              = {}
                            extra_binds             = []
                            extra_env               = []
                            pod_security_policy     = true
                          ~ service_node_port_range = "30000-32767" -> (known after apply)

        ##### Many lines are omitted here #####

      + template_revisions {
          + annotations = (known after apply)

        ##### Many lines are omitted here #####
```
Because of that, the apply fails. Can you please check? Thanks.
@caiconkhicon , adding the new revision BEFORE the current revision seems like a new request and is not required, since you can always add at the end of the list. Anyway, the PR has been updated and it should work now.
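To illustrate the append-at-the-end pattern, here is a minimal sketch of the resource (block names follow the rancher2 provider schema; the revision names and versions are hypothetical, and most attributes are omitted):

```hcl
resource "rancher2_cluster_template" "default" {
  name = "default-global"

  # The existing (current default) revision stays at index 0 of the list...
  template_revisions {
    name    = "1.17.2-2"
    default = true
    cluster_config {
      rke_config {
        kubernetes_version = "v1.17.2-rancher1-2"
      }
    }
  }

  # ...and the new revision is appended at index 1, so the plan shows a
  # pure addition instead of in-place updates to the existing entry.
  template_revisions {
    name = "1.17.2-3" # hypothetical new revision name
    cluster_config {
      rke_config {
        kubernetes_version = "v1.17.2-rancher1-2"
      }
    }
  }
}
```

Because `template_revisions` blocks are matched by position, inserting the new block before the existing one shifts every attribute of the old revision to a new index, which is why the plan above showed so many `~` changes.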
PR is merged to master. Please, reopen issue if needed.
@rawmind0 : Thanks for your fix. It works now.
With the current implementation of Rancher, when I want to use a new cluster template revision and decommission the old one, I must do 3 steps:
The 1st and 2nd steps can be done both in the Rancher UI and with Terraform. However, the 3rd cannot be done with Terraform, because revisions are treated as a list: when I add a new revision, the list is [revision-1, revision-2]. Then, when I remove the first one, it becomes [revision-2]. Thus, if I apply, Terraform understands that I want to remove the second entry while updating the first one. This action will fail for sure, because revision-2 is in use (because of the 2nd step above).
Please implement a way to make this procedure possible, because it is a very common activity. Thanks a lot.
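To make the index-based diff concrete, here is a hedged sketch of what Terraform ends up comparing (revision names are illustrative; real blocks would carry a full `cluster_config`):

```hcl
# Before step 3, state holds two revisions, matched by list index:
#   template_revisions[0] = revision-1
#   template_revisions[1] = revision-2   (now the in-use default)
#
# After deleting the revision-1 block, the config holds only:
#   template_revisions[0] = revision-2
#
# Matching by index, Terraform plans "update element 0 from revision-1
# to revision-2, and delete element 1 (revision-2)" -- and deleting the
# in-use revision-2 is what fails.

resource "rancher2_cluster_template" "default" {
  name = "default-global"

  template_revisions {
    name    = "revision-2" # now at index 0; it was at index 1 before
    default = true
  }
}
```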