-
### Description
I am currently on a fresh install with 3 control plane nodes and 4 worker nodes, with Cilium and WireGuard enabled. Cilium 1.14.x on the latest channel, so Kubernetes 1.28.
After a node reboot by…
-
### Description
I have a cluster running with 1 control plane node and 2 workers.
The cluster has been updated frequently using
- terraform init --upgrade
- terraform apply --auto-approve
but we…
-
**Rancher Server Setup**
- Rancher version: v2.6.7
- Installation option (Docker install/Helm Chart): Docker
- Proxy/Cert Details: no proxy, self-signed cert
**Describe the bug**
rancher with…
-
**Rancher Server Setup**
- Rancher version: `v2.6.5-rc8`
- Installation option (Docker install/Helm Chart): Helm Chart
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc)…
-
I have created a module for Rancher2 to be used in both an AWS and an on-prem solution that uses vSphere. The module I have created uses `ssh_resource` without any problems in AWS. vSphere, howeve…
-
### Description
Either kube-apiserver or CoreDNS is causing these issues, and I honestly am not sure how to debug it. What basically happens is that every now and then all DNS requests fail. This h…
-
### Description
Since I wanted to have different VMs for the control plane after the initial installation, I created a new control plane node pool with the desired node sizes and did TF apply until the new n…
-
**Rancher Server Setup**
- Rancher version: 2.6.3
- Installation option (Docker install/Helm Chart): Helm Chart
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): RKE2
…
-
## Description
When performing a terraform plan with the new `15.2.0` module, it wants to destroy and recreate the worker groups due to the addition of the `metadata_options` block. This was added in this …
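For context, the block in question looks roughly like the following on an `aws_launch_template` resource. This is a hypothetical sketch with illustrative values, not the module's actual rendered configuration; attribute names are from the AWS provider's launch-template resource:

```hcl
# Illustrative only: a metadata_options block of the kind whose addition
# forces the launch template (and thus the worker group) to be replaced.
resource "aws_launch_template" "worker" {
  name_prefix = "eks-worker-" # placeholder name

  metadata_options {
    http_endpoint               = "enabled"  # IMDS reachable from instances
    http_tokens                 = "optional" # "required" would enforce IMDSv2
    http_put_response_hop_limit = 1
  }
}
```

Because Terraform sees a previously absent nested block now being set, the plan can show a destroy/recreate rather than an in-place update.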
-
Internal reference: SURE-2983
Reported in 2.5.8
Issue description:
Using the Rancher 2 Terraform provider to provision a Node Template, to build a vSphere node-driver-based cluster.
Two issues showe…
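The resource involved would look something like the sketch below. This is an assumption for illustration only (field names from the `rancher2` provider's `rancher2_node_template` documentation; every value is a placeholder, nothing here is taken from the report):

```hcl
# Hypothetical node template for a vSphere node-driver-based cluster.
resource "rancher2_node_template" "vsphere" {
  name = "vsphere-template" # placeholder

  vsphere_config {
    creation_type = "template" # clone VMs from an existing template
    cpu_count     = "4"
    memory_size   = "8192"  # MB
    disk_size     = "20480" # MB
  }
}
```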