Open qaz-t opened 3 months ago
Rancher Server Setup
- Rancher version: v2.7.9
- Installation option: Helm Chart
- Kubernetes cluster and version: RKE2, v1.26.15+rke2r1

Information about the Cluster
- Kubernetes version: v1.26.15+rke2r1
- Cluster type: Downstream, Custom
I defined a custom RKE2 cluster using the following config:
resource "rancher2_cluster_v2" "custom_cluster" { name = var.cluster_name kubernetes_version = var.kubernetes_version cluster_agent_deployment_customization {} fleet_agent_deployment_customization {} rke_config { chart_values = <<EOF rke2-cilium: {} EOF machine_global_config = <<EOF cni: "cilium" disable-kube-proxy: false etcd-expose-metrics: false profile: null EOF machine_selector_config { config = <<EOF protect-kernel-defaults: false system-default-registry: docker.io EOF } upgrade_strategy { control_plane_concurrency = "1" control_plane_drain_options { delete_empty_dir_data = true disable_eviction = false enabled = false force = false grace_period = -1 ignore_daemon_sets = true ignore_errors = false skip_wait_for_delete_timeout_seconds = 0 timeout = 120 } worker_concurrency = "1" worker_drain_options { delete_empty_dir_data = true disable_eviction = false enabled = true force = false grace_period = -1 ignore_daemon_sets = true ignore_errors = false skip_wait_for_delete_timeout_seconds = 0 timeout = 120 } } etcd { snapshot_schedule_cron = "0 */5 * * *" snapshot_retention = 5 } } }
The apply completes successfully, but the cluster YAML shown in the Rancher UI contains:
```yaml
upgradeStrategy:
  controlPlaneConcurrency: '1'
  controlPlaneDrainOptions:
    deleteEmptyDirData: true
    disableEviction: false
    enabled: false
    force: false
    gracePeriod: 0
    ignoreDaemonSets: true
    ignoreErrors: false
    postDrainHooks: null
    preDrainHooks: null
    skipWaitForDeleteTimeoutSeconds: 0
    timeout: 120
  workerConcurrency: '1'
  workerDrainOptions:
    deleteEmptyDirData: true
    disableEviction: false
    enabled: true
    force: false
    gracePeriod: 0
    ignoreDaemonSets: true
    ignoreErrors: false
    postDrainHooks: null
    preDrainHooks: null
    skipWaitForDeleteTimeoutSeconds: 0
    timeout: 120
```
`gracePeriod` is `0` instead of `-1`.
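For what it's worth, this looks like the usual Go zero-value trap when the provider expands the Terraform config into the drain-options object: if the expander only copies `grace_period` when the value is positive, a configured `-1` is silently dropped and the field keeps Go's zero value, `0`. A minimal self-contained sketch of that failure mode (`expandDrainOptions` and `DrainOptions` are hypothetical stand-ins, not the provider's actual code):

```go
package main

import "fmt"

// DrainOptions is an illustrative stand-in for the drain-options
// struct the provider sends to Rancher; not the real type.
type DrainOptions struct {
	GracePeriod int
}

// expandDrainOptions mimics an expander that guards on a positive
// value. The sign check silently discards a configured -1.
func expandDrainOptions(raw map[string]interface{}) DrainOptions {
	out := DrainOptions{}
	if v, ok := raw["grace_period"].(int); ok && v > 0 { // -1 fails v > 0
		out.GracePeriod = v
	}
	return out // GracePeriod is left at Go's zero value: 0
}

func main() {
	got := expandDrainOptions(map[string]interface{}{"grace_period": -1})
	fmt.Println(got.GracePeriod) // prints 0 — the same symptom as this issue
}
```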
To Reproduce

Just apply the `rancher2_cluster_v2` resource provided above.
Actual Result

`gracePeriod` is `0` rather than the `-1` I set in the config.
Expected Result

The Rancher cluster YAML file should contain `gracePeriod: -1`.
Additional context

If I create a custom RKE2 cluster directly in the Rancher UI, the gracePeriod value can be set to -1 (the default value is -1).
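If the sign-check guess above is right, the fix would be to test for the key's presence and type rather than the value's sign, so negative values survive the expansion. A hedged sketch under the same hypothetical names:

```go
package main

import "fmt"

type DrainOptions struct {
	GracePeriod int
}

// expandDrainOptionsFixed checks that the key is present and is an
// int, but not its sign, so -1 is copied through unchanged.
func expandDrainOptionsFixed(raw map[string]interface{}) DrainOptions {
	out := DrainOptions{}
	if v, ok := raw["grace_period"].(int); ok {
		out.GracePeriod = v
	}
	return out
}

func main() {
	got := expandDrainOptionsFixed(map[string]interface{}{"grace_period": -1})
	fmt.Println(got.GracePeriod) // prints -1, the expected behavior
}
```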