kubevirt / hyperconverged-cluster-operator

Operator pattern for managing multi-operator products

HCO Deployment via OLM not completed due to SSP Operator CrashLoopBackOff: no matches for kind "VirtualMachine" in version "kubevirt.io/v1" #3007

Closed kgfathur closed 1 week ago

kgfathur commented 2 weeks ago

What happened: I am deploying HCO v1.10.1 via OLM: https://operatorhub.io/operator/community-kubevirt-hyperconverged. The HCO deployment via OLM on Kubernetes (k3s) is failing because the SSP Operator goes into CrashLoopBackOff with the error message: no matches for kind "VirtualMachine" in version "kubevirt.io/v1"

$ kubectl logs -n operators ssp-operator-7d8b9b4c9f-rl97h
...
{"level":"error","ts":"2024-06-24T12:44:08Z","logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"VirtualMachine.kubevirt.io","error":"no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:143\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:154\nk8s.io/apimachinery/pkg/util/wait.waitForWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:207\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:200\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:136"}
{"level":"error","ts":"2024-06-24T12:44:18Z","msg":"Could not wait for Cache to sync","controller":"vm-controller","controllerGroup":"kubevirt.io","controllerKind":"VirtualMachine","error":"failed to wait for vm-controller caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:242\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/manager/runnable_group.go:219"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":"2024-06-24T12:44:18Z","logger":"setup","msg":"shutting down Prometheus metrics server"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"Stopping and waiting for leader election runnables"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"service-controller","controllerGroup":"","controllerKind":"Service"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"All workers finished","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"All workers finished","controller":"service-controller","controllerGroup":"","controllerKind":"Service"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"Stopping and waiting for caches"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":"2024-06-24T12:44:18Z","logger":"controller-runtime.webhook","msg":"shutting down webhook server"}
{"level":"info","ts":"2024-06-24T12:44:18Z","msg":"Wait completed, proceeding to shutdown the manager"}
{"level":"error","ts":"2024-06-24T12:44:18Z","msg":"problem running manager","error":"failed to wait for vm-controller caches to sync: timed out waiting for cache to be synced","stacktrace":"kubevirt.io/ssp-operator/controllers.CreateAndStartReconciler\n\t/workspace/controllers/setup.go:42\nmain.main\n\t/workspace/main.go:273\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
{"level":"error","ts":"2024-06-24T12:44:18Z","logger":"setup","msg":"unable to create or start controller","controller":"SSP","error":"failed to wait for vm-controller caches to sync: timed out waiting for cache to be synced","stacktrace":"main.main\n\t/workspace/main.go:274\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}

I see some other related issues:

#2129

https://github.com/kubevirt/ssp-operator/pull/437

What you expected to happen: Successfully deploy HCO via OLM on plain Kubernetes (k3s).

How to reproduce it (as minimally and precisely as possible): Install the OLM:

curl -sfL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.28.0/install.sh -o install.sh
chmod +x install.sh
./install.sh v0.28.0

$ kubectl get pod -n olm
NAME                                                              READY   STATUS      RESTARTS   AGE
9d43648572067970b101a79436994f36a334198b0f0b32f4ae11551bd44z9pz   0/1     Completed   0          54m
catalog-operator-6fbb6bd9bb-b6qcb                                 1/1     Running     0          62m
olm-operator-574697c48b-gg8z9                                     1/1     Running     0          62m
operatorhubio-catalog-tbfsd                                       1/1     Running     0          59m
packageserver-866b46c5c7-9gsxm                                    1/1     Running     0          59m
packageserver-866b46c5c7-nz8g6                                    1/1     Running     0          59m

Deploy kubevirt-hyperconverged:

curl -sfL https://operatorhub.io/install/community-kubevirt-hyperconverged.yaml -o community-kubevirt-hyperconverged.yaml
vim community-kubevirt-hyperconverged.yaml # just edit the name
kubectl create -f community-kubevirt-hyperconverged.yaml

Content of community-kubevirt-hyperconverged.yaml (only the name was edited):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: operators
spec:
  channel: stable
  name: community-kubevirt-hyperconverged
  source: operatorhubio-catalog
  sourceNamespace: olm

$ kubectl get pod -n operators
NAME                                                  READY   STATUS             RESTARTS      AGE
cdi-operator-7d9ff47dcc-t9ms6                         1/1     Running            0             38m
cluster-network-addons-operator-5b54878c48-mr89l      2/2     Running            0             37m
hco-operator-7fd5bd9fcb-lgbxz                         1/1     Running            0             45m
hco-webhook-6c4f88557f-rgqdc                          1/1     Running            0             45m
hostpath-provisioner-operator-54dcb9dd45-8x5q9        1/1     Running            0             38m
hyperconverged-cluster-cli-download-5dfbc88f5-7jbdt   1/1     Running            0             18m
mtq-operator-56d644c994-nrgx4                         1/1     Running            0             18m
ssp-operator-7d8b9b4c9f-rl97h                         1/1     CrashLoopBackOff   2 (26s ago)   6m40s
virt-operator-768f87488d-m45hh                        1/1     Running            0             38m
virt-operator-768f87488d-wq4z8                        1/1     Running            0             38m

Additional context: I have successfully deployed and tested a manual installation of the KubeVirt and CDI operators (not HCO) on the same Kubernetes (k3s) cluster and architecture. The k3s cluster has been redeployed to make sure it is clean, with no leftover KubeVirt resources/CRDs.

Environment:

tiraboschi commented 2 weeks ago

@kgfathur is this still happening once you create the CR for the HCO operator?

kgfathur commented 2 weeks ago

@kgfathur is this still happening once you create the CR for the HCO operator?

It seems that I missed some steps. I don't have it on my cluster:

$ kubectl get hco -A
No resources found

Do you mean the CR for the HyperConverged, like in this example hco.cr.yaml?

Currently, before creating the CR for the HCO, the ssp-operator deployment is not in CrashLoopBackOff.

$ kubectl get pod -n operators
NAME                                                  READY   STATUS    RESTARTS       AGE
cdi-operator-7d9ff47dcc-t9ms6                         1/1     Running   0              3h3m
cluster-network-addons-operator-5b54878c48-mr89l      2/2     Running   0              3h1m
hco-operator-7fd5bd9fcb-lgbxz                         1/1     Running   0              3h9m
hco-webhook-6c4f88557f-rgqdc                          1/1     Running   0              3h9m
hostpath-provisioner-operator-54dcb9dd45-8x5q9        1/1     Running   0              3h3m
hyperconverged-cluster-cli-download-5dfbc88f5-7jbdt   1/1     Running   0              152m
mtq-operator-56d644c994-nrgx4                         1/1     Running   0              3h3m
ssp-operator-7d8b9b4c9f-kxs2s                         1/1     Running   1 (118m ago)   121m
virt-operator-768f87488d-m45hh                        1/1     Running   0              3h3m
virt-operator-768f87488d-wq4z8                        1/1     Running   0              3h3m

$ kubectl get csv -n operators
NAME                                       DISPLAY                                    VERSION   REPLACES                                   PHASE
kubevirt-hyperconverged-operator.v1.10.1   KubeVirt HyperConverged Cluster Operator   1.10.1    kubevirt-hyperconverged-operator.v1.10.0   Succeeded

However, the ssp-operator deployment is still logging the error no matches for kind "VirtualMachine" in version "kubevirt.io/v1". Is this because I haven't created the CR for the HCO, and therefore the KubeVirt resource, in my cluster?

tiraboschi commented 2 weeks ago

Do you mean the CR for the HyperConverged, like in this example hco.cr.yaml?

Yes, please. The KubeVirt operator will create the VirtualMachine CRD only at that step, and that specific version of the SSP operator is going to crash until the VirtualMachine CRD is there.
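
For reference, once the HyperConverged CR has been created and reconciled, the arrival of the CRD can be verified with something like the following (a minimal sketch, assuming the default CRD name virtualmachines.kubevirt.io):

# wait for the KubeVirt operator to register and establish the VirtualMachine CRD
kubectl wait --for condition=established --timeout=300s crd/virtualmachines.kubevirt.io
kubectl get crd virtualmachines.kubevirt.io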

kgfathur commented 2 weeks ago

Thanks @tiraboschi for your help!

I rebuilt my cluster :D and then repeated the previous steps. Currently I don't understand all of the config parameters in the sample manifest hco.cr.yaml, so I created a simple CR for the HCO:

apiVersion: v1
kind: Namespace
metadata:
  name: kubevirt-hyperconverged
---
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec: {}

I checked that the kubevirt, cdi, and networkaddonsconfigs resources are already deployed.

$ kubectl get hco -A
NAMESPACE                 NAME                      AGE
kubevirt-hyperconverged   kubevirt-hyperconverged   7m51s

$ kubectl get kubevirts.kubevirt.io -A
NAMESPACE                 NAME                               AGE     PHASE
kubevirt-hyperconverged   kubevirt-kubevirt-hyperconverged   7m56s

$ kubectl get cdi -A
NAME                          AGE    PHASE
cdi-kubevirt-hyperconverged   8m2s   Deployed

$ kubectl get networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io
NAME      AGE
cluster   8m15s

In the operators namespace there are additional pods/deployments.

$ kubectl get pod -n operators
NAME                                                  READY   STATUS             RESTARTS     AGE
cdi-apiserver-7c6db74f64-w9zgk                        1/1     Running            0            7m41s
cdi-deployment-6b7ff46f6c-sg7pd                       1/1     Running            0            7m41s
cdi-operator-7d9ff47dcc-vjk56                         1/1     Running            0            12m
cdi-uploadproxy-865dd5c74c-sd7jv                      1/1     Running            0            7m40s
cluster-network-addons-operator-5b54878c48-gtkvb      2/2     Running            0            12m
hco-operator-7fd5bd9fcb-zkkl7                         1/1     Running            0            12m
hco-webhook-b566bf99d-hth7x                           1/1     Running            0            12m
hostpath-provisioner-operator-559ddbf7c4-vrp75        1/1     Running            0            12m
hyperconverged-cluster-cli-download-5dfbc88f5-jpt87   1/1     Running            0            12m
kubemacpool-cert-manager-7c4b6fd4d-lfwvk              1/1     Running            0            7m40s
kubemacpool-mac-controller-manager-6cf8d5564c-5fnnm   2/2     Running            0            7m40s
mtq-operator-56d644c994-wv96p                         1/1     Running            0            12m
ssp-operator-857d5dd7d8-8pwj5                         0/1     CrashLoopBackOff   4 (9s ago)   12m
virt-operator-768f87488d-lk29r                        1/1     Running            0            12m
virt-operator-768f87488d-wg9gd                        1/1     Running            0            12m

However, nothing has been created in the kubevirt-hyperconverged namespace:

$ kubectl get all,secret,cm,sa -n kubevirt-hyperconverged
NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      42m

NAME                     SECRETS   AGE
serviceaccount/default   0         42m

The VirtualMachine CRD also still does not exist:

$ kubectl get crd | grep -i virt
cdiconfigs.cdi.kubevirt.io                                       2024-06-24T16:09:30Z
cdis.cdi.kubevirt.io                                             2024-06-24T16:04:07Z
dataimportcrons.cdi.kubevirt.io                                  2024-06-24T16:09:30Z
datasources.cdi.kubevirt.io                                      2024-06-24T16:09:30Z
datavolumes.cdi.kubevirt.io                                      2024-06-24T16:09:30Z
hostpathprovisioners.hostpathprovisioner.kubevirt.io             2024-06-24T16:04:07Z
hyperconvergeds.hco.kubevirt.io                                  2024-06-24T16:04:07Z
kubevirts.kubevirt.io                                            2024-06-24T16:04:07Z
mtqs.mtq.kubevirt.io                                             2024-06-24T16:04:07Z
networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io   2024-06-24T16:04:07Z
objecttransfers.cdi.kubevirt.io                                  2024-06-24T16:09:30Z
ssps.ssp.kubevirt.io                                             2024-06-24T16:04:07Z
storageprofiles.cdi.kubevirt.io                                  2024-06-24T16:09:30Z
volumeclonesources.cdi.kubevirt.io                               2024-06-24T16:09:30Z
volumeimportsources.cdi.kubevirt.io                              2024-06-24T16:09:30Z
volumeuploadsources.cdi.kubevirt.io                              2024-06-24T16:09:30Z

$ kubectl api-resources | grep -i kubevirt
cdiconfigs                             cdi.kubevirt.io/v1beta1                        false    CDIConfig
cdis                     cdi,cdis      cdi.kubevirt.io/v1beta1                        false    CDI
dataimportcrons          dic,dics      cdi.kubevirt.io/v1beta1                        true     DataImportCron
datasources              das           cdi.kubevirt.io/v1beta1                        true     DataSource
datavolumes              dv,dvs        cdi.kubevirt.io/v1beta1                        true     DataVolume
objecttransfers          ot,ots        cdi.kubevirt.io/v1beta1                        false    ObjectTransfer
storageprofiles                        cdi.kubevirt.io/v1beta1                        false    StorageProfile
volumeclonesources                     cdi.kubevirt.io/v1beta1                        true     VolumeCloneSource
volumeimportsources                    cdi.kubevirt.io/v1beta1                        true     VolumeImportSource
volumeuploadsources                    cdi.kubevirt.io/v1beta1                        true     VolumeUploadSource
hyperconvergeds          hco,hcos      hco.kubevirt.io/v1beta1                        true     HyperConverged
hostpathprovisioners                   hostpathprovisioner.kubevirt.io/v1beta1        false    HostPathProvisioner
kubevirts                kv,kvs        kubevirt.io/v1                                 true     KubeVirt
mtqs                     mtq,mtqs      mtq.kubevirt.io/v1alpha1                       false    MTQ
networkaddonsconfigs                   networkaddonsoperator.network.kubevirt.io/v1   false    NetworkAddonsConfig
ssps                                   ssp.kubevirt.io/v1beta2                        true     SSP
uploadtokenrequests      utr,utrs      upload.cdi.kubevirt.io/v1beta1                 true     UploadTokenRequest

And as expected, the SSP Operator is still in CrashLoopBackOff:

{"level":"error","ts":"2024-06-24T16:13:54Z","logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"VirtualMachine.kubevirt.io","error":"no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:143\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:154\nk8s.io/apimachinery/pkg/util/wait.waitForWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:207\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:200\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:136"}
{"level":"error","ts":"2024-06-24T16:14:04Z","msg":"Could not wait for Cache to sync","controller":"vm-controller","controllerGroup":"kubevirt.io","controllerKind":"VirtualMachine","error":"failed to wait for vm-controller caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:242\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/manager/runnable_group.go:219"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":"2024-06-24T16:14:04Z","logger":"setup","msg":"shutting down Prometheus metrics server"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"Stopping and waiting for leader election runnables"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"All workers finished","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"service-controller","controllerGroup":"","controllerKind":"Service"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"All workers finished","controller":"service-controller","controllerGroup":"","controllerKind":"Service"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"Stopping and waiting for caches"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":"2024-06-24T16:14:04Z","logger":"controller-runtime.webhook","msg":"shutting down webhook server"}
{"level":"info","ts":"2024-06-24T16:14:04Z","msg":"Wait completed, proceeding to shutdown the manager"}
{"level":"error","ts":"2024-06-24T16:14:04Z","msg":"problem running manager","error":"failed to wait for vm-controller caches to sync: timed out waiting for cache to be synced","stacktrace":"kubevirt.io/ssp-operator/controllers.CreateAndStartReconciler\n\t/workspace/controllers/setup.go:42\nmain.main\n\t/workspace/main.go:273\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
{"level":"error","ts":"2024-06-24T16:14:04Z","logger":"setup","msg":"unable to create or start controller","controller":"SSP","error":"failed to wait for vm-controller caches to sync: timed out waiting for cache to be synced","stacktrace":"main.main\n\t/workspace/main.go:274\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}

I will try to re-create the HCO using the entire sample manifest from hco.cr.yaml. Are there any mandatory configs/params in the spec of the HCO CR? (Maybe my previous HCO CR was missing mandatory configs, since it had an empty/default spec?) Or is there any additional step needed?

kgfathur commented 2 weeks ago

Update: Applying the hco.cr.yaml from the main branch does not work on my deployment.

$ kubectl create -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/hco.cr.yaml
Error from server (BadRequest): error when creating "manifests/kubevirt/hco-cr-sample.yaml": HyperConverged in version "v1beta1" cannot be handled as a HyperConverged: strict decoding error: unknown field "spec.featureGates.alignCPUs", unknown field "spec.featureGates.autoResourceLimits", unknown field "spec.featureGates.downwardMetrics", unknown field "spec.featureGates.enableApplicationAwareQuota", unknown field "spec.higherWorkloadDensity", unknown field "spec.virtualMachineOptions.disableSerialConsoleLog"

I think this is because the stable channel is still at version v1.10.1 (Dec 31, 2023).

Trying with hco.cr.yaml v1.10.1:

$ kubectl create -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/v1.10.1/deploy/hco.cr.yaml
hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged created

$ kubectl get csv -n operators
NAME                                       DISPLAY                                    VERSION   REPLACES                                   PHASE
kubevirt-hyperconverged-operator.v1.10.1   KubeVirt HyperConverged Cluster Operator   1.10.1    kubevirt-hyperconverged-operator.v1.10.0   Succeeded

$ kubectl get hco -A
NAMESPACE                 NAME                      AGE
kubevirt-hyperconverged   kubevirt-hyperconverged   2m3s

$ kubectl get cdi -A
NAME                          AGE    PHASE
cdi-kubevirt-hyperconverged   2m8s   Deployed

$ kubectl get networkaddonsconfigs -A
NAME      AGE
cluster   2m15s

$ kubectl get all,secret,cm,sa -n kubevirt-hyperconverged
NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      12m

NAME                     SECRETS   AGE
serviceaccount/default   0         12m

Now, the SSP Operator is not in CrashLoopBackOff:

$ kubectl get pod -n operators
NAME                                                  READY   STATUS    RESTARTS   AGE
cdi-apiserver-7c6db74f64-p59bm                        1/1     Running   0          7m30s
cdi-deployment-6b7ff46f6c-lb4lg                       1/1     Running   0          7m30s
cdi-operator-7d9ff47dcc-st428                         1/1     Running   0          11m
cdi-uploadproxy-865dd5c74c-9g7rp                      1/1     Running   0          7m30s
cluster-network-addons-operator-5b54878c48-8wcsg      2/2     Running   0          11m
hco-operator-7fd5bd9fcb-29m65                         1/1     Running   0          11m
hco-webhook-68c8d7b486-pvzjs                          1/1     Running   0          11m
hostpath-provisioner-operator-686c9c6954-7tzmv        1/1     Running   0          11m
hyperconverged-cluster-cli-download-5dfbc88f5-qtlgw   1/1     Running   0          11m
kubemacpool-cert-manager-7c4b6fd4d-2svmz              1/1     Running   0          7m30s
kubemacpool-mac-controller-manager-6cf8d5564c-xgmxc   2/2     Running   0          7m30s
mtq-operator-56d644c994-s4r6z                         1/1     Running   0          11m
ssp-operator-765d67b668-qtw2m                         1/1     Running   0          11m
virt-operator-768f87488d-5n5pg                        1/1     Running   0          11m
virt-operator-768f87488d-xr27x                        1/1     Running   0          11m

However, the VirtualMachine CRD still does not exist:

$ kubectl get crd | grep -i virt
cdiconfigs.cdi.kubevirt.io                                       2024-06-24T17:43:04Z
cdis.cdi.kubevirt.io                                             2024-06-24T17:39:07Z
dataimportcrons.cdi.kubevirt.io                                  2024-06-24T17:43:04Z
datasources.cdi.kubevirt.io                                      2024-06-24T17:43:04Z
datavolumes.cdi.kubevirt.io                                      2024-06-24T17:43:04Z
hostpathprovisioners.hostpathprovisioner.kubevirt.io             2024-06-24T17:39:08Z
hyperconvergeds.hco.kubevirt.io                                  2024-06-24T17:39:08Z
kubevirts.kubevirt.io                                            2024-06-24T17:39:08Z
mtqs.mtq.kubevirt.io                                             2024-06-24T17:39:08Z
networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io   2024-06-24T17:39:07Z
objecttransfers.cdi.kubevirt.io                                  2024-06-24T17:43:04Z
ssps.ssp.kubevirt.io                                             2024-06-24T17:39:08Z
storageprofiles.cdi.kubevirt.io                                  2024-06-24T17:43:04Z
volumeclonesources.cdi.kubevirt.io                               2024-06-24T17:43:04Z
volumeimportsources.cdi.kubevirt.io                              2024-06-24T17:43:04Z
volumeuploadsources.cdi.kubevirt.io                              2024-06-24T17:43:04Z

$ kubectl logs -n operators ssp-operator-765d67b668-qtw2m
...
{"level":"error","ts":"2024-06-24T17:49:46Z","logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"VirtualMachine.kubevirt.io","error":"no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:143\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:154\nk8s.io/apimachinery/pkg/util/wait.waitForWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:207\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:200\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:136"}
{"level":"error","ts":"2024-06-24T17:49:56Z","logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"VirtualMachine.kubevirt.io","error":"no matches for kind \"VirtualMachine\" in version \"kubevirt.io/v1\"","stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:143\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:154\nk8s.io/apimachinery/pkg/util/wait.waitForWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:207\nk8s.io/apimachinery/pkg/util/wait.poll\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:260\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\t/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:200\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:136"}

nunnatsa commented 2 weeks ago

Oh, 1.10.1? This is solved - kind of. See #2954

Please install version 1.11.0, or if not available, 1.10.7
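
To see which channels and CSV versions the catalog currently offers for this package, something along these lines should work (a sketch, assuming the operatorhubio-catalog source and the olm namespace from the steps above):

kubectl get packagemanifest community-kubevirt-hyperconverged -n olm \
  -o jsonpath='{range .status.channels[*]}{.name}{" -> "}{.currentCSV}{"\n"}{end}'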

If this is working for you, please let us know. Then we'll close this issue as a duplicate of #2954 and keep monitoring it there.

We still need to fix the catalog to stop suggesting the broken versions.

kgfathur commented 2 weeks ago

After trying version v1.11.0, the ssp-operator is working as expected:

$ kubectl create -n kubevirt-hyperconverged -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/v1.10.1/deploy/hco.cr.yaml
hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged created

$ kubectl get csv -n operators
NAME                                       DISPLAY                                    VERSION   REPLACES   PHASE
kubevirt-hyperconverged-operator.v1.11.0   KubeVirt HyperConverged Cluster Operator   1.11.0               Succeeded

$ kubectl get kubevirts -n kubevirt-hyperconverged
NAME                               AGE   PHASE
kubevirt-kubevirt-hyperconverged   12m

$ kubectl get cdi -n kubevirt-hyperconverged
NAME                          AGE   PHASE
cdi-kubevirt-hyperconverged   12m   Deployed

$ kubectl get networkaddonsconfigs
NAME      AGE
cluster   12m

$ kubectl get pod -n operators
NAME                                                  READY   STATUS    RESTARTS   AGE
aaq-operator-6976859798-2lvbg                         1/1     Running   0          43m
cdi-apiserver-64d85bb898-6qkl8                        1/1     Running   0          12m
cdi-deployment-78c94b68dc-kxbjj                       1/1     Running   0          12m
cdi-operator-5b67c9967b-6x829                         1/1     Running   0          43m
cdi-uploadproxy-7779ddfc6b-qd4xt                      1/1     Running   0          12m
cluster-network-addons-operator-6c5496f456-6dtw5      2/2     Running   0          65m
hco-operator-68b595d765-p469d                         1/1     Running   0          65m
hco-webhook-6c457857cb-sbhxt                          1/1     Running   0          65m
hostpath-provisioner-operator-5bc97b8cf5-lhzs6        1/1     Running   0          43m
hyperconverged-cluster-cli-download-fcc89b7d6-kvvdr   1/1     Running   0          65m
kubemacpool-cert-manager-5d7967c84-9n2vr              1/1     Running   0          12m
kubemacpool-mac-controller-manager-7855d88fc7-nb9l4   2/2     Running   0          12m
mtq-operator-79fd98b4b5-wx789                         1/1     Running   0          43m
ssp-operator-5c9f855c8-7z7t8                          1/1     Running   0          65m
virt-operator-66d6b95fd9-56djg                        1/1     Running   0          43m
virt-operator-66d6b95fd9-cnlc5                        1/1     Running   0          43m

$ kubectl get all,secret,svc,cm,sa -n kubevirt-hyperconverged
NAME                                                     AGE
hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged   22m

NAME                                                    AGE   PHASE
kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged   22m

NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      23m

NAME                     SECRETS   AGE
serviceaccount/default   0         23m

There are no more errors related to no matches for kind "VirtualMachine" in version "kubevirt.io/v1" in the ssp-operator's logs:

$ kubectl logs -n operators ssp-operator-5c9f855c8-7z7t8
...
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting service-controller"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting EventSource","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP","source":"kind source: *v1beta2.SSP"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting EventSource","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP","source":"kind source: *v1.RoleBinding"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting EventSource","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP","source":"kind source: *v1.ClusterRole"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting EventSource","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP","source":"kind source: *v1.Role"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting EventSource","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP","source":"kind source: *v1.Namespace"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting EventSource","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP","source":"kind source: *v1.ServiceAccount"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting EventSource","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP","source":"kind source: *v1.ConfigMap"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting Controller","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"service-controller started"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting EventSource","controller":"service-controller","controllerGroup":"","controllerKind":"Service","source":"kind source: *v1.Service"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting Controller","controller":"service-controller","controllerGroup":"","controllerKind":"Service"}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting workers","controller":"ssp","controllerGroup":"ssp.kubevirt.io","controllerKind":"SSP","worker count":1}
{"level":"info","ts":"2024-06-25T06:38:01Z","msg":"Starting workers","controller":"service-controller","controllerGroup":"","controllerKind":"Service","worker count":1}
{"level":"info","ts":"2024-06-25T06:38:01Z","logger":"controllers.Resources","msg":"Starting service reconciliation...","request":"operators/ssp-operator-metrics"}

However, the CRDs related to VirtualMachine are not created yet.

$ kubectl get crd | grep -i virt
aaqs.aaq.kubevirt.io                                             2024-06-25T06:21:07Z
cdiconfigs.cdi.kubevirt.io                                       2024-06-25T07:13:35Z
cdis.cdi.kubevirt.io                                             2024-06-25T06:21:07Z
dataimportcrons.cdi.kubevirt.io                                  2024-06-25T07:13:35Z
datasources.cdi.kubevirt.io                                      2024-06-25T07:13:35Z
datavolumes.cdi.kubevirt.io                                      2024-06-25T07:13:35Z
hostpathprovisioners.hostpathprovisioner.kubevirt.io             2024-06-25T06:21:07Z
hyperconvergeds.hco.kubevirt.io                                  2024-06-25T06:21:07Z
kubevirts.kubevirt.io                                            2024-06-25T06:21:07Z
mtqs.mtq.kubevirt.io                                             2024-06-25T06:21:07Z
networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io   2024-06-25T06:21:07Z
objecttransfers.cdi.kubevirt.io                                  2024-06-25T07:13:35Z
ssps.ssp.kubevirt.io                                             2024-06-25T06:21:07Z
storageprofiles.cdi.kubevirt.io                                  2024-06-25T07:13:35Z
volumeclonesources.cdi.kubevirt.io                               2024-06-25T07:13:35Z
volumeimportsources.cdi.kubevirt.io                              2024-06-25T07:13:35Z
volumeuploadsources.cdi.kubevirt.io                              2024-06-25T07:13:35Z

Are there any additional steps needed to create the VirtualMachine CRDs?

nunnatsa commented 2 weeks ago

virt-api should deploy the CRD, but I can't see virt pods (other than the virt-operator). Could you please check the virt-controller log, to see whether the virt-api deployment was created?

kgfathur commented 1 week ago

virt-api should deploy the CRD, but I can't see virt pods (other than the virt-operator). Could you please check the virt-controller log, to see whether the virt-api deployment was created?

I cannot find the virt-controller component, maybe it's not created yet. Do you mean the virt-operator? Because I see a clue in the log below.

I deployed the hco.cr.yaml (v1.11.0) from the default sample. It errors when installed in any namespace other than the kubevirt-hyperconverged namespace.

$ kubectl create -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/v1.10.1/deploy/hco.cr.yaml
Error from server (Forbidden): error when creating "manifests/kubevirt/hco.cr.v1.11.0.yaml": admission webhook "validate-hco.kubevirt.io" denied the request: invalid namespace for v1beta1.HyperConverged - please use the kubevirt-hyperconverged namespace

So, I deployed it in the kubevirt-hyperconverged namespace.

$ kubectl create -n kubevirt-hyperconverged -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/v1.10.1/deploy/hco.cr.yaml
hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged created

However, there are no pods/deployments at all in the kubevirt-hyperconverged namespace (maybe they are not created yet).

$ kubectl get all,secret,svc,cm,sa -n kubevirt-hyperconverged
NAME                                                     AGE
hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged   45h

NAME                                                    AGE   PHASE
kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged   45h

NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      45h

NAME                     SECRETS   AGE
serviceaccount/default   0         45h

I installed the HCO operator using the default sample manifest for deployment via OLM from operatorhub.io.

curl -sfL https://operatorhub.io/install/community-kubevirt-hyperconverged.yaml -o community-kubevirt-hyperconverged.yaml
vim community-kubevirt-hyperconverged.yaml # just edit the name and the channel
kubectl create -f community-kubevirt-hyperconverged.yaml

The content of the file (only the name and the channel were edited):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: operators
spec:
  channel: 1.11.0
  name: community-kubevirt-hyperconverged
  source: operatorhubio-catalog
  sourceNamespace: olm

I just edited the channel to 1.11.0 as suggested before. It seems some of the components are deployed in the operators namespace, because in the sample manifest the Subscription is created in namespace: operators.

$ kubectl get pod -n operators
NAME                                                  READY   STATUS    RESTARTS   AGE
aaq-operator-6976859798-2lvbg                         1/1     Running   0          45h
cdi-apiserver-64d85bb898-6qkl8                        1/1     Running   0          45h
cdi-deployment-78c94b68dc-kxbjj                       1/1     Running   0          45h
cdi-operator-5b67c9967b-6x829                         1/1     Running   0          45h
cdi-uploadproxy-7779ddfc6b-qd4xt                      1/1     Running   0          45h
cluster-network-addons-operator-6c5496f456-6dtw5      2/2     Running   0          46h
hco-operator-68b595d765-p469d                         1/1     Running   0          46h
hco-webhook-6c457857cb-sbhxt                          1/1     Running   0          46h
hostpath-provisioner-operator-5bc97b8cf5-lhzs6        1/1     Running   0          45h
hyperconverged-cluster-cli-download-fcc89b7d6-kvvdr   1/1     Running   0          46h
kubemacpool-cert-manager-5d7967c84-9n2vr              1/1     Running   0          45h
kubemacpool-mac-controller-manager-7855d88fc7-nb9l4   2/2     Running   0          45h
mtq-operator-79fd98b4b5-wx789                         1/1     Running   0          45h
ssp-operator-5c9f855c8-7z7t8                          1/1     Running   0          46h
virt-operator-66d6b95fd9-56djg                        1/1     Running   0          45h
virt-operator-66d6b95fd9-cnlc5                        1/1     Running   0          45h

Then, checking the virt-operator logs, I found a clue: KubeVirt CR is created in another namespace than the operator, that is not supported.

$ kubectl logs -n operators virt-operator-66d6b95fd9-56djg
...
{"component":"virt-operator","kind":"","level":"info","msg":"Handling KubeVirt resource","name":"kubevirt-kubevirt-hyperconverged","namespace":"kubevirt-hyperconverged","pos":"kubevirt.go:727","timestamp":"2024-06-26T21:21:28.721859Z","uid":"0a563009-2e46-4027-b3f6-46978c6a2832"}
{"component":"virt-operator","level":"error","msg":"Will ignore the install request until the situation is resolved.","pos":"kubevirt.go:912","reason":"KubeVirt CR is created in another namespace than the operator, that is not supported.","timestamp":"2024-06-26T21:21:28.722153Z"}

I don't know if the title of this issue is still relevant. Should I create a new issue or just rename this one?

nunnatsa commented 1 week ago

The whole installation - operators and CR - must be in the kubevirt-hyperconverged namespace.

nunnatsa commented 1 week ago

@tiraboschi - the operatorhub installation guide says to install with

$ kubectl create -f https://operatorhub.io/install/stable/community-kubevirt-hyperconverged.yaml

the content of the file is:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-community-kubevirt-hyperconverged
  namespace: operators # << ======= WRONG NS
spec:
  channel: stable
  name: community-kubevirt-hyperconverged
  source: operatorhubio-catalog
  sourceNamespace: olm

Do we have any control over this file?

kgfathur commented 1 week ago

The whole installation - operators and CR - must be in the kubevirt-hyperconverged namespace.

Thanks @nunnatsa for the information. I see some docs in this repo and found that it should indeed be there, in the kubevirt-hyperconverged namespace.

The installation guides from operatorhub.io and artifacthub.io show the wrong namespace.

nunnatsa commented 1 week ago

@tiraboschi - I think maybe the issue is here? https://github.com/kubevirt/hyperconverged-cluster-operator/blob/6bc3cbc651fadca3f28ed5ec3aeaa7ddcd33492c/pkg/components/components.go#L863

nunnatsa commented 1 week ago

Trying to fix here: https://github.com/kubevirt/hyperconverged-cluster-operator/pull/3013

kgfathur commented 1 week ago

@tiraboschi - the operatorhub installation guide says to install with

$ kubectl create -f https://operatorhub.io/install/stable/community-kubevirt-hyperconverged.yaml

the content of the file is:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-community-kubevirt-hyperconverged
  namespace: operators # << ======= WRONG NS
spec:
  channel: stable
  name: community-kubevirt-hyperconverged
  source: operatorhubio-catalog
  sourceNamespace: olm

Do we have any control over this file?

Besides the wrong namespace, I think the installation guide on operatorhub/artifacthub is missing the OperatorGroup CR. This repo's docs, Deploy HCO via OLM, show that we must create a namespace, a Subscription, and an OperatorGroup to deploy HCO via OLM.

However, the installation guide/manifest from operatorhub/artifacthub is missing the namespace and the OperatorGroup.

# original guide
# $ kubectl create -f https://operatorhub.io/install/community-kubevirt-hyperconverged.yaml

# But currently we need to change the namespace to `kubevirt-hyperconverged`
$ curl -sfL https://operatorhub.io/install/community-kubevirt-hyperconverged.yaml -o community-kubevirt-hyperconverged.yaml
$ vim community-kubevirt-hyperconverged.yaml # edit the namespace (I also edit the channel to test the v1.11.0)
$ kubectl create -f community-kubevirt-hyperconverged.yaml
Error from server (NotFound): error when creating "community-kubevirt-hyperconverged.yaml": namespaces "kubevirt-hyperconverged" not found

After we create namespace and then deploy HCO via OLM:

$ kubectl create namespace kubevirt-hyperconverged
namespace/kubevirt-hyperconverged created

$ kubectl create -f community-kubevirt-hyperconverged.yaml
subscription.operators.coreos.com/kubevirt-hyperconverged created

$ kubectl get subscriptions -n kubevirt-hyperconverged
NAME                      PACKAGE                             SOURCE                  CHANNEL
kubevirt-hyperconverged   community-kubevirt-hyperconverged   operatorhubio-catalog   1.11.0

The operator deployment via OLM will still fail, because the OperatorGroup does not exist:

$ kubectl logs -n olm catalog-operator-6fbb6bd9bb-qfrh7
...
E0627 05:57:07.871296       1 queueinformer_operator.go:319] sync "kubevirt-hyperconverged" failed: found 0 operatorGroups, expected 1
time="2024-06-27T05:57:07Z" level=info msg="resolving sources" id=lRzgG namespace=kubevirt-hyperconverged
time="2024-06-27T05:57:07Z" level=info msg="checking if subscriptions need update" id=lRzgG namespace=kubevirt-hyperconverged
time="2024-06-27T05:57:08Z" level=info msg="resolving sources" id=e2O2j namespace=kubevirt-hyperconverged
time="2024-06-27T05:57:08Z" level=info msg="checking if subscriptions need update" id=e2O2j namespace=kubevirt-hyperconverged
E0627 05:57:08.271025       1 queueinformer_operator.go:319] sync "kubevirt-hyperconverged" failed: found 0 operatorGroups, expected 1

So, we need to create the namespace and the OperatorGroup as described in Deploy HCO via OLM. If the installation guide or the sample manifest on operatorhub/artifacthub could be improved (to include the namespace & OperatorGroup), it would be great and would help new users (like me) trying to deploy KubeVirt using the hyperconverged-cluster-operator. Thank you!

kgfathur commented 1 week ago

After adjusting the manifest to include the required resources:

# community-kubevirt-hyperconverged.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubevirt-hyperconverged
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: kubevirt-hyperconverged
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  channel: 1.11.0   # the current stable channel has been updated to v1.11.1, but I want to test v1.11.0
  name: community-kubevirt-hyperconverged
  source: operatorhubio-catalog
  sourceNamespace: olm

After creating the required resources, HCO v1.11.0 was successfully deployed via OLM:

$ kubectl create -f community-kubevirt-hyperconverged.yaml
namespace/kubevirt-hyperconverged created
operatorgroup.operators.coreos.com/kubevirt-hyperconverged-group created
subscription.operators.coreos.com/kubevirt-hyperconverged created

$ kubectl get subscriptions -n kubevirt-hyperconverged
NAME                      PACKAGE                             SOURCE                  CHANNEL
kubevirt-hyperconverged   community-kubevirt-hyperconverged   operatorhubio-catalog   1.11.0

$ kubectl get csv -n kubevirt-hyperconverged
NAME                                       DISPLAY                                    VERSION   REPLACES   PHASE
kubevirt-hyperconverged-operator.v1.11.0   KubeVirt HyperConverged Cluster Operator   1.11.0               Succeeded

All of the resources were deployed successfully in the kubevirt-hyperconverged namespace, as expected.

$ kubectl get pod -n kubevirt-hyperconverged
NAME                                                   READY   STATUS    RESTARTS       AGE
aaq-operator-6f78c54bcb-vvr9r                          1/1     Running   0              3h9m
bridge-marker-5dm6w                                    1/1     Running   0              170m
bridge-marker-6rwkf                                    1/1     Running   0              170m
bridge-marker-77ttb                                    1/1     Running   0              170m
bridge-marker-92fkh                                    1/1     Running   0              170m
bridge-marker-98zz7                                    1/1     Running   0              170m
bridge-marker-btpds                                    1/1     Running   0              170m
bridge-marker-cwt7p                                    1/1     Running   0              170m
bridge-marker-l2br4                                    1/1     Running   0              170m
bridge-marker-nqktw                                    1/1     Running   0              170m
bridge-marker-ps827                                    1/1     Running   0              170m
bridge-marker-rstl9                                    1/1     Running   0              170m
bridge-marker-wcqgn                                    1/1     Running   0              170m
cdi-apiserver-64d85bb898-kmnrn                         1/1     Running   0              170m
cdi-deployment-78c94b68dc-tdlmr                        1/1     Running   0              170m
cdi-operator-9dc84cdfd-967gl                           1/1     Running   0              3h9m
cdi-uploadproxy-7779ddfc6b-vn4kt                       1/1     Running   0              170m
cluster-network-addons-operator-59dddbff8d-p7sx8       2/2     Running   0              3h31m
hco-operator-67fdbfdd47-tfjp4                          1/1     Running   0              3h31m
hco-webhook-b76849c99-g84st                            1/1     Running   0              3h31m
hostpath-provisioner-operator-9f9cdc4c4-m2h24          1/1     Running   0              3h9m
hyperconverged-cluster-cli-download-78c79cd978-bwl65   1/1     Running   0              3h31m
kube-cni-linux-bridge-plugin-2lknr                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-48bx6                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-7qqdb                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-ckqk2                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-j6vlt                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-krx9n                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-l9vf9                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-mrlbq                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-rd297                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-rxszf                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-txtx9                     1/1     Running   0              170m
kube-cni-linux-bridge-plugin-zhbr5                     1/1     Running   0              170m
kubemacpool-cert-manager-5d7967c84-x5sdw               1/1     Running   0              170m
kubemacpool-mac-controller-manager-7855d88fc7-22kbz    2/2     Running   0              170m
mtq-operator-9b674769-qmksv                            1/1     Running   0              3h9m
multus-2nfmr                                           1/1     Running   0              170m
multus-5kghg                                           1/1     Running   0              170m
multus-9pr8f                                           1/1     Running   0              170m
multus-bsz6c                                           1/1     Running   0              170m
multus-f4kds                                           1/1     Running   0              170m
multus-fksfd                                           1/1     Running   0              170m
multus-gdwtj                                           1/1     Running   0              170m
multus-kp5s5                                           1/1     Running   0              170m
multus-n795x                                           1/1     Running   0              170m
multus-p7xbp                                           1/1     Running   0              170m
multus-t98l5                                           1/1     Running   0              170m
multus-vt5fs                                           1/1     Running   0              170m
ssp-operator-7486bd78fc-xbnz9                          1/1     Running   1 (169m ago)   3h31m
virt-api-56bdddd94-76bhr                               1/1     Running   0              169m
virt-api-56bdddd94-9r67q                               1/1     Running   0              169m
virt-controller-5956594b98-547cf                       1/1     Running   0              165m
virt-controller-5956594b98-zthrz                       1/1     Running   0              165m
virt-exportproxy-57968cd7bc-l6tgb                      1/1     Running   0              165m
virt-exportproxy-57968cd7bc-vlndt                      1/1     Running   0              165m
virt-handler-9ddjs                                     1/1     Running   0              165m
virt-handler-kn6g6                                     1/1     Running   0              165m
virt-handler-kxdhz                                     1/1     Running   0              165m
virt-handler-lkzgp                                     1/1     Running   0              165m
virt-handler-rj7md                                     1/1     Running   0              165m
virt-handler-w95d4                                     1/1     Running   0              165m
virt-operator-6f6c884b65-pd7w4                         1/1     Running   0              3h9m
virt-operator-6f6c884b65-qgg86                         1/1     Running   0              3h9m

Successfully tested creating a DataVolume and a VirtualMachine:

$ kubectl create -f vm.fedora.cdi.yaml
datavolume.cdi.kubevirt.io/fedora-cdi created
virtualmachine.kubevirt.io/fedora-cdi created
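
The vm.fedora.cdi.yaml file itself isn't included above; for readers who want to reproduce this, a minimal DataVolume-backed VirtualMachine manifest along these lines should give a similar result (a sketch only - the container disk image, sizes, and the omitted cloud-init credentials are assumptions):

# Sketch only; not the exact vm.fedora.cdi.yaml used above.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-cdi
spec:
  running: true
  dataVolumeTemplates:
    - metadata:
        name: fedora-cdi
      spec:
        storage:
          resources:
            requests:
              storage: 10Gi
        source:
          registry:
            url: "docker://quay.io/containerdisks/fedora:latest"
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          dataVolume:
            name: fedora-cdi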

$ kubectl get dv
NAME         PHASE       PROGRESS   RESTARTS   AGE
fedora-cdi   Succeeded   100.0%     1          2m24s

$ kubectl get vm
NAME         AGE     STATUS    READY
fedora-cdi   2m31s   Running   True

$ virtctl console fedora-cdi
Successfully connected to fedora-cdi console. The escape sequence is ^]

fedora-cdi login: root
Password:
[root@fedora-cdi ~]# uname -a
Linux fedora-cdi 6.8.5-301.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr 11 20:00:10 UTC 2024 x86_64 GNU/Linux
[root@fedora-cdi ~]#

Thank you @tiraboschi @nunnatsa for your help! Now I see the stable channel on operatorhub has been fixed and updated to v1.11.1 per #3009; I will test it as well.

tiraboschi commented 1 week ago

Do we have any control over this file?

Besides the wrong namespace, I think the installation guide on operatorhub/artifacthub is missing the OperatorGroup CR. This repo's docs, Deploy HCO via OLM, show that we must create a namespace, a Subscription, and an OperatorGroup to deploy HCO via OLM.

However, the installation guide/manifest from operatorhub/artifacthub is missing the namespace and the OperatorGroup.

# original guide
# $ kubectl create -f https://operatorhub.io/install/community-kubevirt-hyperconverged.yaml

# But currently we need to change the namespace to `kubevirt-hyperconverged`
$ curl -sfL https://operatorhub.io/install/community-kubevirt-hyperconverged.yaml -o community-kubevirt-hyperconverged.yaml
$ vim community-kubevirt-hyperconverged.yaml # edit the namespace (I also edit the channel to test the v1.11.0)
$ kubectl create -f community-kubevirt-hyperconverged.yaml
Error from server (NotFound): error when creating "community-kubevirt-hyperconverged.yaml": namespaces "kubevirt-hyperconverged" not found

That text is somehow hard-coded into the frontend modal of the operatorhub website, see: https://github.com/operator-framework/operatorhub.io/blob/fcee0e77a41f01775f62bd9a840f0787c3e7a951/frontend/src/components/modals/InstallModal.tsx#L105

And since we also support AllNamespaces (for a different reason), the operatorhub website considers us a globalOperator: https://github.com/operator-framework/operatorhub.io/blob/fcee0e77a41f01775f62bd9a840f0787c3e7a951/frontend/src/utils/operatorUtils.ts#L93

so it suggests installing it in the operators namespace, where a generic OperatorGroup is already available, and so that step is also skipped.
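
For context, that classification comes from the installModes declared in the CSV: a package that supports AllNamespaces is treated as a global operator. A sketch of the relevant CSV fragment (illustrative values only, not the actual HCO CSV):

spec:
  installModes:
    - supported: true
      type: OwnNamespace
    - supported: true
      type: SingleNamespace
    - supported: false
      type: MultiNamespace
    - supported: true   # this is why operatorhub.io suggests the generic "operators" namespace
      type: AllNamespaces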

We need to understand if and how we can publish custom install instructions on the OperatorHub website.

Thank you @tiraboschi @nunnatsa for your help! Now I see the stable channel on operatorhub has been fixed and updated to v1.11.1 per https://github.com/kubevirt/hyperconverged-cluster-operator/pull/3009; I will test it as well.

Thanks for having reported it, I'm going to close this.