kubevirt / containerized-data-importer

Data Import Service for kubernetes, designed with kubevirt in mind.
Apache License 2.0

"clone token missing" error thrown on RancherLabs Harvester platform when using KCLI for K3S deploy #1553

Closed · larssb closed this 3 years ago

larssb commented 3 years ago

/kind bug

What happened: I get this error:

{"level":"error","ts":1609186904.8528516,"logger":"controller","msg":"Reconciler error","controller":"clone-controller","name":"k3s-test-master-0-disk0","namespace":"longhorn-system","error":"clone token missing","errorVerbose":"clone token missing\nkubevirt.io/containerized-data-importer/pkg/controller.validateCloneToken\n\tpkg/controller/clone-controller.go:660\nkubevirt.io/containerized-data-importer/pkg/controller.(*CloneReconciler).validateSourceAndTarget\n\tpkg/controller/clone-controller.go:356\nkubevirt.io/containerized-data-importer/pkg/controller.(*CloneReconciler).reconcileSourcePod\n\tpkg/controller/clone-controller.go:229\nkubevirt.io/containerized-data-importer/pkg/controller.(*CloneReconciler).Reconcile\n\tpkg/controller/clone-controller.go:204\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90\nruntime.goexit\n\tGOROOT/src/runtime/asm_amd64.s:1357","stacktrace":"kubevirt.io/containerized-data-importer/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tvendor/github.com/go-logr/zapr/zapr.go:128\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:237\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}

N.B. the error mentions the namespace longhorn-system; that's because I explicitly tried that namespace. I've also tried without specifying a namespace, in which case the default namespace is used.

This happens when I use KCLI to create a K3S cluster on the RancherLabs Harvester platform.

My KCLI cmd is: kcli create kube k3s --paramfile k3s-test_deploy_parms.yml --force

What you expected to happen:

That KCLI successfully clones the required disks for the K3S master and worker nodes. Instead I find that error in the logs of the cdi-deployment... Pod.
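
For anyone hitting the same thing, the relevant errors can be pulled from the controller with something like this (the cdi namespace is an assumption; Harvester may install CDI elsewhere, so point it at whatever namespace the cdi-deployment Pod runs in):

  # Filter the CDI controller logs for the clone-token error
  kubectl -n cdi logs deployment/cdi-deployment | grep "clone token"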

How to reproduce it (as minimally and precisely as possible):

  1. Install Harvester, e.g. an ISO install. See this blog
  2. Install KCLI, e.g. with the container method. See here
  3. Setup KCLI config to use the kubevirt provider. Read here
  4. Execute kcli info kube k3s to get the parameters to use when deploying K3S with the command mentioned above (kcli create kube k3s --paramfile k3s-test_deploy_parms.yml --force)
  5. Execute: kcli create kube k3s --paramfile k3s-test_deploy_parms.yml --force (see the sketch after this list for what this asks CDI to do under the hood)
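
Under the hood, step 5 asks CDI to clone a base image PVC into a disk per node. A hedged sketch of the kind of DataVolume involved; the source PVC name/namespace and the size are made up for illustration, not necessarily what KCLI generates:

  # dv-example.yaml -- illustrative only
  apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
  metadata:
    name: k3s-test-master-0-disk0
    namespace: longhorn-system
  spec:
    source:
      pvc:
        namespace: default      # hypothetical namespace of the source image PVC
        name: base-image-pvc    # hypothetical source PVC
    pvc:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi

As far as I understand, when such a DataVolume is admitted the CDI apiserver stamps the clone token annotation onto it, and the clone-controller refuses to start the clone source pod if that token is missing or invalid, which is exactly the error above.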

Anything else we need to know?:

Environment:


Thank you very much

awels commented 3 years ago

Sounds like an issue with Harvester to me. The message indicates that a clone is failing because a token is not valid. The most likely cause is that the token expired before Harvester started the clone.
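
If that theory holds, the gap between the clone target's creation and the clone actually starting should exceed the token's short TTL. A rough way to check the first half of that (the dv short name assumes a standard CDI install; adjust if Harvester wraps it differently):

  # When was the clone target created? A long delay before the clone-source
  # pod appears would support the "token expired" explanation.
  kubectl -n longhorn-system get dv k3s-test-master-0-disk0 \
    -o jsonpath='{.metadata.creationTimestamp}'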

larssb commented 3 years ago

Hi @awels,

It's working now. Mr. Karmab made the necessary changes to make this work with the KCLI tool.

So closing.