Closed — @9numbernine9 closed this issue 1 year ago.
Hi @9numbernine9
Thanks for reporting this!
This is odd; the ClusterRole that gives the K8up service account its permissions hasn't been touched between Helm chart 4.1.0 and 4.2.0.
After the update, do the `k8up-edit`, `k8up-manager` and `k8up-view` ClusterRoles still exist?
Can you also post the `values.yaml` you're using to deploy K8up?
Hi @Kidswiss!
The `k8up-edit`, `k8up-manager` and `k8up-view` ClusterRoles are still there. (Just for fun, I uninstalled and reinstalled the Helm chart and verified that they were re-created.) The cluster roles that are created look like this:
The `values.yaml` file I'm providing is pretty basic (I'm deploying this with Terraform, FWIW):
k8up:
  envVars:
    - name: BACKUP_GLOBALREPOPASSWORD
      valueFrom:
        secretKeyRef:
          key: RESTIC_PASSWORD
          name: restic
The RBAC rules look okay.
Could you please also check whether the service account on the K8up pod has an actual ClusterRoleBinding to the `k8up-io-manager` ClusterRole?
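One quick way to verify this (a sketch; the object names follow the defaults shown in this thread and may differ in your install) is to dump the binding and ask the API server directly whether the service account holds the permission:

```shell
# Show which ClusterRole the binding points at and which subjects it grants.
kubectl get clusterrolebinding k8up-io -o yaml

# Ask the API server directly whether the operator's service account
# is allowed to update Roles in a target namespace.
kubectl auth can-i update roles.rbac.authorization.k8s.io \
  --as=system:serviceaccount:k8up:k8up-io -n example
```

`kubectl auth can-i` answers `yes` or `no` and evaluates the full RBAC chain, so it catches a broken binding even when each individual object looks plausible.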
Hmm, I'm not an expert at Kubernetes permissions but here we go. 😄
The pod refers to the `k8up-io` ServiceAccount:
The `k8up-io` ServiceAccount is defined as:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-name: k8up-io
    meta.helm.sh/release-namespace: k8up
  creationTimestamp: "2023-04-03T10:16:51Z"
  labels:
    app.kubernetes.io/instance: k8up-io
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: k8up
    helm.sh/chart: k8up-4.2.0
  name: k8up-io
  namespace: k8up
  resourceVersion: "66601651"
  uid: 48d18c16-edfe-47cb-9f4f-6766675e3a80
The `ClusterRoleBinding` is defined as:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: k8up-io
    meta.helm.sh/release-namespace: k8up
  creationTimestamp: "2023-04-03T10:16:51Z"
  labels:
    app.kubernetes.io/instance: k8up-io
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: k8up
    helm.sh/chart: k8up-4.2.0
  name: k8up-io
  resourceVersion: "66601661"
  uid: 7311f521-fdf0-4461-bdae-27410e41f5b3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8up-io-manager
subjects:
  - kind: ServiceAccount
    name: k8up-io
    namespace: k8up
And the `k8up-io-manager` ClusterRole:
The RBAC looks good.
Unfortunately I was not able to reproduce the issue locally, which makes it hard to figure out what's wrong...
Some things you could also check:
Hi @Kidswiss - apologies for not writing back sooner.
To answer your questions:
I've been meaning to replicate my issue from a clean slate, and I finally found the time this weekend. I've now tried a few times, starting from a clean slate each time, and the steps below are how I've been able to replicate it.
❯ kubectl create ns k8up
namespace/k8up created
Create a `Secret` in our namespace that contains the password we'll use for backups:
❯ cat restic.yaml
apiVersion: v1
data:
  RESTIC_PASSWORD: <base64-password goes here>
kind: Secret
metadata:
  name: restic
  namespace: k8up
type: Opaque
❯ kubectl apply -f restic.yaml
secret/restic created
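As an aside, the same Secret can be created without a manifest file; a sketch (the literal password value is a placeholder):

```shell
# Equivalent one-liner; kubectl base64-encodes the literal value for us,
# so the plaintext password is passed here, not its base64 form.
kubectl -n k8up create secret generic restic \
  --from-literal=RESTIC_PASSWORD='<plaintext-password-goes-here>'
```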
❯ kubectl apply -f https://github.com/k8up-io/k8up/releases/download/k8up-4.2.0/k8up-crd.yaml
customresourcedefinition.apiextensions.k8s.io/archives.k8up.io created
customresourcedefinition.apiextensions.k8s.io/backups.k8up.io created
customresourcedefinition.apiextensions.k8s.io/checks.k8up.io created
customresourcedefinition.apiextensions.k8s.io/prebackuppods.k8up.io created
customresourcedefinition.apiextensions.k8s.io/prunes.k8up.io created
customresourcedefinition.apiextensions.k8s.io/restores.k8up.io created
customresourcedefinition.apiextensions.k8s.io/schedules.k8up.io created
customresourcedefinition.apiextensions.k8s.io/snapshots.k8up.io created
❯ helm repo add k8up-io https://k8up-io.github.io/k8up
"k8up-io" has been added to your repositories
❯ cat values.yaml
k8up:
  envVars:
    - name: BACKUP_GLOBALREPOPASSWORD
      valueFrom:
        secretKeyRef:
          key: RESTIC_PASSWORD
          name: restic
❯ helm install k8up k8up-io/k8up --namespace k8up --values values.yaml
NAME: k8up
LAST DEPLOYED: Thu Apr 6 06:27:27 2023
NAMESPACE: k8up
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
#####################
! Attention !
#####################
This Helm chart does not include CRDs.
Please make sure you have installed or upgraded the necessary CRDs as instructed in the Chart README.
#####################
The `Pod` created by the Helm chart looks reasonable:
❯ kubectl get pods -n k8up
NAME READY STATUS RESTARTS AGE
k8up-7dc796f59d-n8pbx 1/1 Running 0 22h
❯ kubectl describe pod -n k8up k8up-7dc796f59d-n8pbx
Name: k8up-7dc796f59d-n8pbx
Namespace: k8up
Priority: 0
Service Account: k8up
Node: k3s01/192.168.10.31
Start Time: Sun, 09 Apr 2023 11:29:52 -0400
Labels: app.kubernetes.io/instance=k8up
app.kubernetes.io/name=k8up
pod-template-hash=7dc796f59d
Annotations: <none>
Status: Running
IP: 10.0.0.194
IPs:
IP: 10.0.0.194
IP: fd00::16
Controlled By: ReplicaSet/k8up-7dc796f59d
Containers:
k8up-operator:
Container ID: containerd://a1f3067cd3ee260f2bcf35d544f75900efc863cf144f40afe79fb1630a521ac3
Image: ghcr.io/k8up-io/k8up:v2.7.0
Image ID: ghcr.io/k8up-io/k8up@sha256:e73dc644d9af02d0093905bff8591cd9039a0ad5058c534c421deb3a466f6610
Port: 8080/TCP
Host Port: 0/TCP
Args:
operator
State: Running
Started: Sun, 09 Apr 2023 11:29:54 -0400
Ready: True
Restart Count: 0
Limits:
memory: 256Mi
Requests:
cpu: 20m
memory: 128Mi
Liveness: http-get http://:http/metrics delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
BACKUP_IMAGE: ghcr.io/k8up-io/k8up:v2.7.0
BACKUP_ENABLE_LEADER_ELECTION: true
BACKUP_OPERATOR_NAMESPACE: k8up (v1:metadata.namespace)
BACKUP_GLOBALREPOPASSWORD: <set to the key 'RESTIC_PASSWORD' in secret 'restic'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tn99b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-tn99b:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Create a `Schedule` to perform our backups:
❯ cat schedule.yaml
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: example-scheduled-backup
  namespace: example
spec:
  backend:
    rest:
      url: https://restic-url-goes-here.com
  backup:
    schedule: '*/5 0 * * *'
❯ k apply -f schedule.yaml
schedule.k8up.io/example-scheduled-backup created
2023-04-09T15:29:54Z INFO k8up Starting k8up… {"version": "2.7.0", "date": "2023-03-30T12:16:39Z", "commit": "8f203d75eaa6826405e0c288d2d0915fa6c53e79", "go_os": "linux", "go_arch": "amd64", "go_version": "go1.19.7", "uid": 65532, "gid": 0}
2023-04-09T15:29:54Z INFO k8up.operator initializing
2023-04-09T15:29:54Z INFO k8up.operator.controller-runtime.metrics Metrics server is starting to listen {"addr": ":8080"}
2023-04-09T15:29:54Z INFO k8up.operator Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
I0409 15:29:54.783793 1 leaderelection.go:248] attempting to acquire leader lease k8up/d2ab61da.syn.tools...
I0409 15:30:10.868721 1 leaderelection.go:258] successfully acquired lease k8up/d2ab61da.syn.tools
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "archive.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Archive", "source": "kind source: *v1.Archive"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "archive.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Archive", "source": "kind source: *v1.Job"}
2023-04-09T15:30:10Z INFO k8up.operator Starting Controller {"controller": "archive.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Archive"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "restore.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Restore", "source": "kind source: *v1.Restore"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "prune.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Prune", "source": "kind source: *v1.Prune"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "restore.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Restore", "source": "kind source: *v1.Job"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "prune.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Prune", "source": "kind source: *v1.Job"}
2023-04-09T15:30:10Z INFO k8up.operator Starting Controller {"controller": "prune.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Prune"}
2023-04-09T15:30:10Z INFO k8up.operator Starting Controller {"controller": "restore.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Restore"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "backup.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Backup", "source": "kind source: *v1.Backup"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "check.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Check", "source": "kind source: *v1.Check"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "check.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Check", "source": "kind source: *v1.Job"}
2023-04-09T15:30:10Z INFO k8up.operator Starting Controller {"controller": "check.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Check"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "backup.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Backup", "source": "kind source: *v1.Job"}
2023-04-09T15:30:10Z INFO k8up.operator Starting Controller {"controller": "backup.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Backup"}
2023-04-09T15:30:10Z INFO k8up.operator Starting EventSource {"controller": "schedule.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Schedule", "source": "kind source: *v1.Schedule"}
2023-04-09T15:30:10Z INFO k8up.operator Starting Controller {"controller": "schedule.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Schedule"}
2023-04-09T15:30:10Z INFO k8up.operator Starting workers {"controller": "prune.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Prune", "worker count": 1}
2023-04-09T15:30:10Z INFO k8up.operator Starting workers {"controller": "schedule.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Schedule", "worker count": 1}
2023-04-09T15:30:10Z INFO k8up.operator Starting workers {"controller": "check.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Check", "worker count": 1}
2023-04-09T15:30:10Z INFO k8up.operator Starting workers {"controller": "archive.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Archive", "worker count": 1}
2023-04-09T15:30:10Z INFO k8up.operator Starting workers {"controller": "restore.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Restore", "worker count": 1}
2023-04-09T15:30:10Z INFO k8up.operator Starting workers {"controller": "backup.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Backup", "worker count": 1}
2023-04-10T00:00:00Z INFO k8up.operator.scheduler Running schedule {"cron": "*/5 0 * * *", "key": "example/example-scheduled-backup/backup"}
2023-04-10T00:00:00Z ERROR k8up.operator Reconciler error {"controller": "backup.k8up.io", "controllerGroup": "k8up.io", "controllerKind": "Backup", "Backup": {"name":"example-scheduled-backup-backup-vn47q","namespace":"example"}, "namespace": "example", "name": "example-scheduled-backup-backup-vn47q", "reconcileID": "2c412819-7c89-4c96-8560-7353ef8470e7", "error": "roles.rbac.authorization.k8s.io \"pod-executor\" is forbidden: User \"system:serviceaccount:k8up:k8up\" cannot update resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"example\""}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:326
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:273
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:234
(The same Reconciler error and controller-runtime stack trace repeat several more times, each with a different reconcileID.)
I hope that's helpful! As I mentioned in my original comment, I'm running Kubernetes `1.26.3` via K3S (`v1.26.3+k3s1`). Normally I manage the configuration of my cluster using Terraform, but for the purposes of debugging I've installed and configured K8up manually to make sure that I'm not introducing some weird configuration via Terraform.
This is where things are a bit odd, because even though I've created a schedule that should run every five minutes
The schedule you created will only run every 5 minutes during hour 0; running it every 5 minutes would be `*/5 * * * *` :)
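For reference, a corrected `Schedule` spec that actually fires every five minutes might look like this (same manifest as above, with only the cron expression changed):

```yaml
spec:
  backup:
    # Minute-field step with no hour restriction: fires every 5 minutes, all day.
    # '*/5 0 * * *' would instead fire every 5 minutes only between 00:00 and 00:55.
    schedule: '*/5 * * * *'
```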
❯ kubectl describe pod -n k8up k8up-7dc796f59d-n8pbx
Name: k8up-7dc796f59d-n8pbx
Namespace: k8up
Priority: 0
Service Account: k8up
Here the service account seems to be named `k8up`, but the rolebinding is for an account with the name `k8up-io`. Can you manually edit the `k8up-io` ClusterRoleBinding and change the referenced user to `k8up`, then restart the pod?
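If it comes to that, the subject can be swapped without opening an editor; a sketch, assuming the binding is named `k8up-io` and the operator Deployment is named `k8up` in namespace `k8up` (adjust to your release):

```shell
# Point the first subject of the binding at the "k8up" service account.
kubectl patch clusterrolebinding k8up-io --type=json \
  -p='[{"op": "replace", "path": "/subjects/0/name", "value": "k8up"}]'

# Restart the operator pod so it runs with the corrected permissions.
kubectl -n k8up rollout restart deployment k8up
```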
The name of the service account is taken from the release name by default, so `helm install k8up k8up-io/k8up --namespace k8up --values values.yaml` should result in consistent naming of the service account and the bindings, both named `k8up`.
This is where things are a bit odd, because even though I've created a schedule that should run every five minutes

The schedule you created will only run every 5 minutes during hour 0; running it every 5 minutes would be `*/5 * * * *` :)
Yup, I'm an idiot. I didn't even notice the `0`. 😂
Here the service account seems to be named `k8up`, but the rolebinding is for an account with the name `k8up-io`. Can you manually edit the `k8up-io` ClusterRoleBinding and change the referenced user to `k8up`, then restart the pod?
I think the referenced account for the `ClusterRoleBinding` already is `k8up`?
❯ k describe clusterrolebinding k8up
Name: k8up
Labels: app.kubernetes.io/instance=k8up
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=k8up
helm.sh/chart=k8up-4.2.0
Annotations: meta.helm.sh/release-name: k8up
meta.helm.sh/release-namespace: k8up
Role:
Kind: ClusterRole
Name: k8up-manager
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount k8up k8up
Okay, I'm starting to run out of ideas here :(
I've just tested k8up in a local k3d cluster with the exact same version you have, and I didn't run into any issues starting backups.
I've used this version:
k3d cluster create -i docker.io/rancher/k3s:v1.26.3-k3s1
For the installation I copy and pasted your exact commands from your comment. From what I've seen in your comments the RBAC and everything seems to be correct.
@Kidswiss I've done some more testing/debugging and here's what I've found so far:
I can use either Helm chart version `4.1.0` or `4.2.0`, but what really matters is the image tag. `v2.6.0` works fine with either Helm chart, but `v2.7.0` doesn't work with either chart. It seems to me like it's a change between `v2.6.0` and `v2.7.0` that's triggering my issue.
I can make it work with `v2.7.0` (!) by editing the `k8up-io-manager` ClusterRole and granting it permission to `update` roles. Essentially changing this:
- verbs:
    - create
    - delete
    - get
    - list
    - watch
  apiGroups:
    - rbac.authorization.k8s.io
  resources:
    - rolebindings
    - roles
To this:
- verbs:
    - create
    - delete
    - get
    - list
    - watch
    - update # Hi, I'm new here!
  apiGroups:
    - rbac.authorization.k8s.io
  resources:
    - rolebindings
    - roles
This makes the error go away, and the scheduled backups now complete successfully. I humbly admit that I don't know enough about Kubernetes RBAC to say whether this change is a good idea or not.
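The same workaround can be scripted; a sketch, assuming the `rolebindings`/`roles` rule sits at a known index in the chart's rendered ClusterRole (verify the index first, since it can vary between chart versions):

```shell
# Inspect the ClusterRole and count to the rule covering roles/rolebindings.
kubectl get clusterrole k8up-io-manager -o yaml

# Append the missing "update" verb to that rule; the index 5 used here is
# only an example and must match what the inspection above shows.
kubectl patch clusterrole k8up-io-manager --type=json \
  -p='[{"op": "add", "path": "/rules/5/verbs/-", "value": "update"}]'
```

Note that Helm will revert a manual patch like this on the next `helm upgrade`, so it only buys time until a fixed chart ships.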
Huh, interesting. Thanks for investigating!
I've just skimmed through the diff between 2.6 and 2.7. We're still doing the role logic the same way; the only thing that changed is that we added a new permission. That's where your issue comes from, and also why I was never able to reproduce it!
For K8up to be able to add the new permissions to an existing role, it actually needs this update permission. The bug never triggered for me as I've always tested against a fresh cluster.
So yes, the `update` verb is actually necessary if people migrate from an older version. I'm opening a quick PR for that.
This hit me too today. @Kidswiss, is an "emergency" release of the Helm chart as 4.2.1 possible? In the meantime I will downgrade the operator to 2.6.0.
Hi @SchoolGuy
I'm currently waiting for some more reviews in https://github.com/k8up-io/k8up/pull/852 to make the whole RBAC system a bit more secure as well. I hope to get it done soon.
Description
Hello! 👋
Since upgrading to `2.7.0` (deployed via Helm chart `4.2.0`), the `k8up` controller generates the following errors whenever it prepares to run a backup:

My backup configuration previously worked fine with `2.6.x` and Helm chart `4.1.0`. I have reinstalled the CRDs as per the release notes, but this doesn't seem to make a difference; it seems like something no longer has the correct permissions to execute the backup.

Rolling back to `2.6.0` (Helm `4.1.0`) works fine.

I'm using a `ScheduledBackup` that looks like this:

Additional Context
Kubernetes `1.26.3` (K3S `1.26.3+k3s1`)
)Logs
No response
Expected Behavior
Backups in `2.7.0` continue to work. 😄

Steps To Reproduce
No response
Version of K8up
2.7.0
Version of Kubernetes
1.26.3
Distribution of Kubernetes
K3S (`1.26.3+k3s1`)