kubevirt / kubevirt

Kubernetes Virtualization API and runtime in order to define and manage virtual machines.
https://kubevirt.io

pod didn't trigger scale-up: 1 Insufficient devices.kubevirt.io/kvm #12210

Closed: oyzl1230 closed this issue 2 months ago

oyzl1230 commented 2 months ago

What happened: I deployed KubeVirt v1.2.0 on AKS, but launching a VM fails with the error "pod didn't trigger scale-up: 1 Insufficient devices.kubevirt.io/kvm". Could you please help? Does KubeVirt support deployment on Azure AKS, or is there some special configuration required to run KubeVirt there? Thanks.

Information: I followed the guide (https://kubevirt.io/quickstart_cloud) to deploy the KubeVirt operator on an Azure AKS cluster, and the deployment appears successful.
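For reference, the steps from that guide amount to roughly the following (the version is pinned to v1.2.0 here for illustration; the guide resolves the latest release dynamically):

```bash
# Deploy the KubeVirt operator, then create the KubeVirt custom resource
# (per https://kubevirt.io/quickstart_cloud; VERSION pinned for illustration)
export VERSION=v1.2.0
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"
```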

```
kubectl get all -n kubevirt
Warning: kubevirt.io/v1 VirtualMachineInstancePresets is now deprecated and will be removed in v2.
NAME                                   READY   STATUS    RESTARTS      AGE
pod/virt-api-d8479847d-645nv           1/1     Running   0             54m
pod/virt-api-d8479847d-fqgfp           1/1     Running   0             54m
pod/virt-controller-858dcdfc85-fmc4j   1/1     Running   0             54m
pod/virt-controller-858dcdfc85-h827l   1/1     Running   0             54m
pod/virt-handler-8vmjk                 1/1     Running   0             54m
pod/virt-handler-h8spf                 1/1     Running   0             54m
pod/virt-handler-ktdmt                 1/1     Running   0             54m
pod/virt-handler-r59ff                 1/1     Running   0             54m
pod/virt-handler-wg2gv                 1/1     Running   0             54m
pod/virt-handler-z4t9x                 1/1     Running   0             54m
pod/virt-operator-86dbbc4f6c-j6jhb     1/1     Running   0             57m
pod/virt-operator-86dbbc4f6c-m89hr     1/1     Running   1 (57m ago)   57m

NAME                                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubevirt-operator-webhook     ClusterIP   10.0.195.102   <none>        443/TCP   54m
service/kubevirt-prometheus-metrics   ClusterIP   None           <none>        443/TCP   54m
service/virt-api                      ClusterIP   10.0.63.61     <none>        443/TCP   54m
service/virt-exportproxy              ClusterIP   10.0.78.230    <none>        443/TCP   54m

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/virt-handler   6         6         6       6            6           kubernetes.io/os=linux   54m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/virt-api          2/2     2            2           54m
deployment.apps/virt-controller   2/2     2            2           54m
deployment.apps/virt-operator     2/2     2            2           57m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/virt-api-d8479847d           2         2         2       54m
replicaset.apps/virt-controller-858dcdfc85   2         2         2       54m
replicaset.apps/virt-operator-86dbbc4f6c     2         2         2       57m

NAME                            AGE   PHASE
kubevirt.kubevirt.io/kubevirt   55m   Deployed
```
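The quickstart's own verification step confirms the same thing; a minimal check:

```bash
# Wait until the KubeVirt CR reports phase "Deployed"
kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
```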

I then followed the lab (https://kubevirt.io/labs/kubernetes/lab1.html) to deploy the test VM, but it failed with this error: 0/8 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 6 Insufficient devices.kubevirt.io/kvm. preemption: 0/8 nodes are available: 2 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod..

pod didn't trigger scale-up: 1 Insufficient devices.kubevirt.io/kvm
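For context, lab1 boils down to roughly these steps (the manifest URL is the one the lab uses; testvm is the VM it defines):

```bash
# Create the example VirtualMachine from lab1, then start it
kubectl apply -f https://kubevirt.io/labs/manifests/vm.yaml
kubectl get vms
virtctl start testvm
```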

The virt-launcher pod details are below:

```
kubectl describe pods virt-launcher-testvm-lmc8z
Name:         virt-launcher-testvm-lmc8z
Namespace:    default
Priority:     0
Node:         <none>
Labels:       kubevirt.io=virt-launcher
              kubevirt.io/created-by=66e21ad3-5baa-4272-946f-fd98bd060af0
              kubevirt.io/domain=testvm
              kubevirt.io/size=small
              vm.kubevirt.io/name=testvm
Annotations:  kubectl.kubernetes.io/default-container: compute
              kubevirt.io/domain: testvm
              kubevirt.io/migrationTransportUnix: true
              kubevirt.io/vm-generation: 2
              post.hook.backup.velero.io/command: ["/usr/bin/virt-freezer", "--unfreeze", "--name", "testvm", "--namespace", "default"]
              post.hook.backup.velero.io/container: compute
              pre.hook.backup.velero.io/command: ["/usr/bin/virt-freezer", "--freeze", "--name", "testvm", "--namespace", "default"]
              pre.hook.backup.velero.io/container: compute
              traffic.sidecar.istio.io/kubevirtInterfaces: k6t-eth0
Status:       Pending
IP:
IPs:          <none>
Controlled By:  VirtualMachineInstance/testvm
Init Containers:
  container-disk-binary:
    Image:      quay.io/kubevirt/virt-launcher:v1.2.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/bin/cp /usr/bin/container-disk /init/usr/bin/container-disk
    Limits:
      cpu:     100m
      memory:  40M
    Requests:
      cpu:     10m
      memory:  1M
    Environment:
      XDG_CACHE_HOME:   /var/run/kubevirt-private
      XDG_CONFIG_HOME:  /var/run/kubevirt-private
      XDG_RUNTIME_DIR:  /var/run
    Mounts:
      /init/usr/bin from virt-bin-share-dir (rw)
  volumecontainerdisk-init:
    Image:      quay.io/kubevirt/cirros-container-disk-demo
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/bin/container-disk
    Args:
      --no-op
    Limits:
      cpu:     10m
      memory:  40M
    Requests:
      cpu:                1m
      ephemeral-storage:  50M
      memory:             1M
    Environment:  <none>
    Mounts:
      /usr/bin from virt-bin-share-dir (rw)
      /var/run/kubevirt-ephemeral-disks/container-disk-data/66e21ad3-5baa-4272-946f-fd98bd060af0 from container-disks (rw)
Containers:
  compute:
    Image:      quay.io/kubevirt/virt-launcher:v1.2.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/bin/virt-launcher-monitor --qemu-timeout 309s --name testvm --uid 66e21ad3-5baa-4272-946f-fd98bd060af0 --namespace default --kubevirt-share-dir /var/run/kubevirt --ephemeral-disk-dir /var/run/kubevirt-ephemeral-disks --container-disk-dir /var/run/kubevirt/container-disks --grace-period-seconds 45 --hook-sidecars 0 --ovmf-path /usr/share/OVMF --run-as-nonroot
    Limits:
      devices.kubevirt.io/kvm:        1
      devices.kubevirt.io/tun:        1
      devices.kubevirt.io/vhost-net:  1
    Requests:
      cpu:                            100m
      devices.kubevirt.io/kvm:        1
      devices.kubevirt.io/tun:        1
      devices.kubevirt.io/vhost-net:  1
      ephemeral-storage:              50M
      memory:                         317880392
    Environment:
      XDG_CACHE_HOME:   /var/run/kubevirt-private
      XDG_CONFIG_HOME:  /var/run/kubevirt-private
      XDG_RUNTIME_DIR:  /var/run
      POD_NAME:         virt-launcher-testvm-lmc8z (v1:metadata.name)
    Mounts:
      /var/run/kubevirt from public (rw)
      /var/run/kubevirt-ephemeral-disks from ephemeral-disks (rw)
      /var/run/kubevirt-private from private (rw)
      /var/run/kubevirt/container-disks from container-disks (rw)
      /var/run/kubevirt/hotplug-disks from hotplug-disks (rw)
      /var/run/kubevirt/sockets from sockets (rw)
      /var/run/libvirt from libvirt-runtime (rw)
  volumecontainerdisk:
    Image:      quay.io/kubevirt/cirros-container-disk-demo
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/bin/container-disk
    Args:
      --copy-path /var/run/kubevirt-ephemeral-disks/container-disk-data/66e21ad3-5baa-4272-946f-fd98bd060af0/disk_0
    Limits:
      cpu:     10m
      memory:  40M
    Requests:
      cpu:                1m
      ephemeral-storage:  50M
      memory:             1M
    Environment:  <none>
    Mounts:
      /usr/bin from virt-bin-share-dir (rw)
      /var/run/kubevirt-ephemeral-disks/container-disk-data/66e21ad3-5baa-4272-946f-fd98bd060af0 from container-disks (rw)
  guest-console-log:
    Image:      quay.io/kubevirt/virt-launcher:v1.2.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/bin/virt-tail
    Args:
      --logfile /var/run/kubevirt-private/66e21ad3-5baa-4272-946f-fd98bd060af0/virt-serial0-log
    Limits:
      cpu:     15m
      memory:  60M
    Requests:
      cpu:     5m
      memory:  35M
    Environment:
      VIRT_LAUNCHER_LOG_VERBOSITY:  2
    Mounts:
      /var/run/kubevirt-private from private (ro)
Readiness Gates:
  Type                                  Status
  kubevirt.io/virtual-machine-unpaused  True
Conditions:
  Type                                  Status
  PodScheduled                          False
  kubevirt.io/virtual-machine-unpaused  True
Volumes:
  private:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  public:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  sockets:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  virt-bin-share-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  libvirt-runtime:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  ephemeral-disks:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  container-disks:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  hotplug-disks:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/arch=amd64
                 kubevirt.io/schedulable=true
Tolerations:     devices.kubevirt.io/kvm:NoSchedule op=Exists
                 devices.kubevirt.io/tun:NoSchedule op=Exists
                 devices.kubevirt.io/vhost-net:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason             Age                   From                Message
  ----     ------             ----                  ----                -------
  Warning  FailedScheduling   47m                   default-scheduler   0/8 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 6 Insufficient devices.kubevirt.io/kvm. preemption: 0/8 nodes are available: 2 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod..
  Warning  FailedScheduling   15m (x7 over 42m)     default-scheduler   0/8 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 6 Insufficient devices.kubevirt.io/kvm. preemption: 0/8 nodes are available: 2 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod..
  Normal   NotTriggerScaleUp  2m7s (x271 over 47m)  cluster-autoscaler  pod didn't trigger scale-up: 1 Insufficient devices.kubevirt.io/kvm
```
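The events point at one root cause from two angles: no existing node offers devices.kubevirt.io/kvm, and the autoscaler knows that adding another node of the same type would not help. A quick way to confirm this (not part of the original report) is to check whether any node advertises the KVM device-plugin resource that virt-handler registers when /dev/kvm is present:

```bash
# Print each node's name plus any devices.kubevirt.io/kvm lines from its
# Capacity/Allocatable sections; a node name with no kvm line beneath it
# means the device plugin found no usable /dev/kvm on that node.
kubectl describe nodes | grep -E '^Name:|devices.kubevirt.io/kvm'
```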

Environment:

xpivarc commented 2 months ago

Hi, cloud providers usually don't enable nested virtualization. The missing devices.kubevirt.io/kvm resource means you cannot run VMs without enabling emulation, as mentioned in the guide: `kubectl -n kubevirt patch kubevirt kubevirt --type=merge --patch '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'`. Note: this is just for testing; the performance of the VMs will not be great.
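The same setting expressed as YAML on the KubeVirt custom resource, for anyone who prefers editing the CR to patching it (this is the field the patch above sets):

```yaml
# Excerpt of the KubeVirt CR with software emulation enabled;
# equivalent to the kubectl patch above, and intended for testing only.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      useEmulation: true
```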

xpivarc commented 2 months ago

/close

Please reopen if you have any other questions.

kubevirt-bot commented 2 months ago

@xpivarc: Closing this issue.

In response to [this](https://github.com/kubevirt/kubevirt/issues/12210#issuecomment-2188274191):

> /close
> Please reopen if you have any other questions.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.