Closed: jessebot closed this 4 months ago
@jessebot Does this always happen, or only with specific settings, for example pre-backup pods or backup command annotations?
From the logs it looks like the error gets thrown during the sync operation after the backup. That sync ensures that for each snapshot in the restic repository there is also a snapshot object in that namespace.
Does it write these snapshots correctly?
Hi again!
@jessebot Does this always happen, or only with specific settings, for example pre-backup pods or backup command annotations?
Yes, this always happens to my knowledge, but I do always use backup annotations on all the PVCs I want to back up. Here's an example of the PVC:
PVC.yaml
Here's an example of a backup that I ran recently:
Backup.yaml
Here's the full backup pod log from top to bottom, from the above Backup resource (I don't think there's any sensitive info here):
pod.log
Here's my k8up helm chart values (indentation is weird because it's passed into an Argo CD Application valuesObject); perhaps most notable is k8up.skipWithoutAnnotation=True:
helm_values.yaml
From the logs it looks like the error gets thrown during the sync operation after the backup. That sync ensures that for each snapshot in the restic repository there is also a snapshot object in that namespace.
Does it write these snapshots correctly?
Yes, they are actually written correctly! That's what's really weird. This also happens if there are errors in the logs, but as shown above, it happens when everything is successful as well. I checked the snapshots and they all look OK. I just started a restore of one of them to local disk, and although it's 63G and my internet isn't great, it is actually restoring properly. It's at about 25% restored right now and all seems well. Update: it restored fine! :)
Let me know if there's any other info I can provide! 🙏
Thanks for the details, I'll try to reproduce it.
What I find really strange: according to the stack trace, it happens within the creation of the k8s client we use to write the snapshot objects to the cluster.
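For context, that warning comes from controller-runtime whenever its machinery (such as a client) is used before a logger has been registered via log.SetLogger. Below is a minimal, hypothetical Go sketch of that pattern, using a generic unstructured object for illustration; it is not necessarily the exact fix that landed in k8up, and the object names are made up:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Register a logger before any controller-runtime client is created;
	// otherwise controller-runtime eventually prints the
	// "log.SetLogger(...) was never called" warning plus a stack trace.
	log.SetLogger(zap.New())

	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		panic(err)
	}

	// Hypothetical object for illustration only; the real sync writes
	// k8up's own Snapshot resources.
	snap := &unstructured.Unstructured{}
	snap.SetGroupVersionKind(schema.GroupVersionKind{
		Group: "k8up.io", Version: "v1", Kind: "Snapshot",
	})
	snap.SetNamespace("default")
	snap.SetName("example-snapshot")

	if err := c.Create(context.TODO(), snap); err != nil {
		panic(err)
	}
}
```

Calling SetLogger once at process start is enough; without it, the client still works (which matches the successful backups), but controller-runtime emits the warning and stack trace seen in the backup pod logs.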
@Kidswiss thanks for the fix! Could I please ask when this will be released in the helm chart? Kind regards! :)
The release pipeline is running as I'm writing this :)
Description
I always get the following in the logs of any backup pod:
[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
The backup pods are always successful and the data is actually backed up, but they always print this traceback and I don't know why. Nothing is broken, it's just a bit confusing.
Additional Context
I'm using s3 as my backend and Backblaze B2 as the remote s3 provider. All is working fine there. Again, none of this breaks anything; it just causes weird log messages that I don't understand. Update: I switched to Cloudflare R2 and the issue persists.
Thanks for any help you can provide and thanks for continuing to maintain this project!
Logs
Expected Behavior
No stack traces in the logs.
Steps To Reproduce
I'm using the 4.7.0 helm chart on k8s. I deploy the crds first, and then the helm chart, both via Argo CD. You can see my full values here: https://github.com/small-hack/argocd-apps/blob/730c3494444d19f8cad59184e0ba2039bcead4d6/k8up/k8up_argocd_appset.yaml#L42-L75
This happens with both Backups and Schedules manifests applied manually through kubectl.
Version of K8up
v2.10.0
Version of Kubernetes
1.29.5 originally, but I recently upgraded to v1.30.2+k3s1 and still found the error.
Distribution of Kubernetes
K3s