-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Version
higher than v1.16.0 and lower than v1.17.0
### What happened?
Recently I upgraded my homelab k3s clus…
-
## Describe the bug
I just tried to upgrade to 1.7.1, and all volumes with strict-local stay in the upgrading-engine state. There is one additional replica for each of them in Failed mode. Updating "Upda…
-
Update - Only affects k3s types, not all imported clusters
Update 2 - Only affects the cluster config page. Removing user roles is still possible via the Cluster Explorer / RBAC / Cluster Members lis…
-
As for https://github.com/rancher/dashboard/issues/7395, once the PR is merged, we should implement an E2E test.
This is about adding a simple E2E test to validate that RKE2/k3s is the default on t…
-
**Setup**
- Rancher version: `v2.6.4-rc10`
- Browser type & version: `Chrome`
**Describe the bug**
An error appears in the UI when upgrading the k8s version in the local k3s cluster
**To Reproduce**
1…
-
**Rancher Server Setup**
- Rancher version: v2.9.1
- Installation option (Docker install/Helm Chart):
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc):
- Proxy/Cert D…
-
**Environmental Info:**
K3s Version: latest/stable
Node(s) CPU architecture, OS, and Version:
*
Cluster Configuration:
*
**Describe the bug:**
We cannot use the convenience script …
-
# Description
Error log below; it looks like a timeout issue. Please advise how to configure that.
```
{"level":"error","ts":"2024-08-21T01:14:01Z","msg":"failed to complete action","c…
```
-
## Describe the bug
Share manager pods are periodically getting stuck in a crash cycle. This seems to affect all the share manager pods on a single node at a time. This has happened on two nodes out …
-
### Description
I just came back to work after ~2 weeks and one of my nodes booted into emergency mode after failing cloud-init.
After pressing "Enter", the node seems to retry cloud-init, this t…