What happened:
I am unable to access a newly deployed VM. The output of kubectl get vmi shows that the VM is running and ready, but I believe it is not fully initializing: I cannot access the VM via virtctl console or virtctl ssh, and there are no guest console logs from the virt-launcher pod. Note that I deployed the Kubernetes cluster using k0s.
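For reference, these are the commands I am using to try to reach the guest (the VM name is from the lab example; the guest-console-log container name is my assumption based on the default virt-launcher pod layout):
$ virtctl console testvm
$ virtctl ssh cirros@testvm
$ kubectl logs virt-launcher-testvm-fhrc2 -c guest-console-log
None of them gets me to a guest prompt, and the log container shows no guest output.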
All nodes in the cluster pass QEMU host validation:
node3:~$ virt-host-validate qemu
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : WARN (Unknown if this platform has IOMMU support)
QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)
KubeVirt components:
$ kubectl get all -n kubevirt
Warning: kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
NAME READY STATUS RESTARTS AGE
pod/virt-api-64d75d4f5-66vxg 1/1 Running 0 22h
pod/virt-api-64d75d4f5-rl6cn 1/1 Running 0 22h
pod/virt-controller-64d65c6684-ggwlc 1/1 Running 0 22h
pod/virt-controller-64d65c6684-xqx7m 1/1 Running 0 22h
pod/virt-handler-82vdv 1/1 Running 0 22h
pod/virt-handler-fsvz8 1/1 Running 0 22h
pod/virt-handler-l664w 1/1 Running 0 22h
pod/virt-operator-6c89df8955-jrjf9 1/1 Running 0 22h
pod/virt-operator-6c89df8955-r9wkj 1/1 Running 0 22h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubevirt-operator-webhook ClusterIP 10.101.225.75 <none> 443/TCP 22h
service/kubevirt-prometheus-metrics ClusterIP None <none> 443/TCP 22h
service/virt-api ClusterIP 10.96.236.192 <none> 443/TCP 22h
service/virt-exportproxy ClusterIP 10.110.33.182 <none> 443/TCP 22h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/virt-handler 3 3 3 3 3 kubernetes.io/os=linux 22h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/virt-api 2/2 2 2 22h
deployment.apps/virt-controller 2/2 2 2 22h
deployment.apps/virt-operator 2/2 2 2 22h
NAME DESIRED CURRENT READY AGE
replicaset.apps/virt-api-64d75d4f5 2 2 2 22h
replicaset.apps/virt-controller-64d65c6684 2 2 2 22h
replicaset.apps/virt-operator-6c89df8955 2 2 2 22h
NAME AGE PHASE
kubevirt.kubevirt.io/kubevirt 22h Deployed
############################
$ kubectl get pod,vm,vmi
NAME READY STATUS RESTARTS AGE
pod/virt-launcher-testvm-fhrc2 3/3 Running 0 11m
NAME AGE STATUS READY
virtualmachine.kubevirt.io/testvm 11m Running True
NAME AGE PHASE IP NODENAME READY
virtualmachineinstance.kubevirt.io/testvm 11m Running 10.244.135.7 node3 True
What you expected to happen:
A working VM deployed using KubeVirt.
How to reproduce it (as minimally and precisely as possible):
1) Deploy a k0s Kubernetes cluster using k0sctl (https://docs.k0sproject.io/v1.30.0+k0s.0/k0sctl-install/) on a Turing RK1 compute module. Note: I am using Calico with VXLAN as my CNI, but the same issue occurred with kube-router (the default CNI for k0s).
2) Install KubeVirt.
3) Deploy a test VM following https://kubevirt.io/labs/kubernetes/lab1 (the steps I ran are sketched below).
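The lab steps boil down to roughly the following (the manifest URL is the one referenced on the lab page):
$ kubectl apply -f https://kubevirt.io/labs/manifests/vm.yaml
$ virtctl start testvm
$ kubectl get vmi testvm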
Additional context:
My servers use the ARM64 architecture, and the hardware is Turing RK1 compute modules (https://turingpi.com/product/turing-rk1/). I have been able to successfully deploy a Cirros VM using virsh with the cirros-0.5.2-aarch64 image. I also attempted to use an aarch64 image for my KubeVirt VM, but it likewise failed to initialize (I used the image quay.io/kubevirt/cirros-container-disk-demo:v1.2.2-arm64).
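The aarch64 attempt was roughly the following, swapping the containerDisk image in the lab manifest before applying it (a sketch of what I did, not an exact transcript):
$ curl -sLo vm-arm64.yaml https://kubevirt.io/labs/manifests/vm.yaml
# edit the containerDisk volume to point at the arm64 image:
#   containerDisk:
#     image: quay.io/kubevirt/cirros-container-disk-demo:v1.2.2-arm64
$ kubectl apply -f vm-arm64.yaml
$ virtctl start testvm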
I have been interested in using KubeVirt, but I have run into this same issue with different Kubernetes deployments (kind and Minikube). All tests have been done on a Turing Pi RK1 cluster (single node and multi-node).
I have attached the logs from the virt-launcher pod (all containers) and my KubeVirt CR object.
Environment:
KubeVirt version (use virtctl version): v1.2.1
Kubernetes version (use kubectl version): v1.30.0+k0s
Cloud provider or hardware configuration: Hardware is baremetal Turing RK1 compute modules (https://turingpi.com/product/turing-rk1/). The cluster is 4 nodes (1 controller and 3 workers) but I had this issue using one RK1 node as a single node cluster.
Kernel (use uname -a): 5.10.160-rockchip aarch64 GNU/Linux
Attachments: kubevirt-cr-yaml.txt, virt-launcher-logs.txt