Closed: yuvalif closed this issue 5 years ago
@yuvalif I assume you're referring to this (current) code here, right?
check "VM is running" "( kubectl get vm testvm -o jsonpath='{.status.phase}' ) | grep -q Running"
IIUC you are saying that the Pod was in the Running phase but the VM phase was not reported as Running?
@fabiand any insight?
Anyway, the suggested fix relies on a Pod name (iscsi-demo-target) that may change over time. I think we need a better way to identify the relevant Pod, if one is needed at all.
@yuvalif thanks for reporting the error.
Usually any VM object should have a .status.phase field. If it's not there, then something is wrong with our virt-controller.
Can you please provide the output of
kubectl get pods --all-namespaces
after your deployment?
$ kubectl get --all-namespaces pods
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-minikube-c6c6764d-qnqf5 1/1 Running 6 1h
default iscsi-demo-target-tgtd-5674b4f6fd-jtqgl 1/1 Running 5 1h
kube-system kube-addon-manager-minikube 1/1 Running 6 1h
kube-system kube-dns-54cccfbdf8-mh5xm 3/3 Running 18 1h
kube-system kubernetes-dashboard-77d8b98585-6c8cs 1/1 Running 6 1h
kube-system libvirt-pvfjm 2/2 Running 10 1h
kube-system storage-provisioner 1/1 Running 6 1h
kube-system virt-controller-b7b8fd9b7-d9ld9 0/1 Running 5 1h
kube-system virt-controller-b7b8fd9b7-qhkwm 0/1 Running 5 1h
kube-system virt-handler-zjqqc 1/1 Running 5 1h
A kubectl describe vm testvm
will give you more context (note the describe). It is very likely that the attached events will show why it failed. If that does not show anything, then a kubectl describe pods -l "kubevirt.io/domain=testvm"
would be the next step.
I can confirm this bug. The guide you have in README.md
works out of the box; there's no problem with that. The problem happens when you try to use another ISO, e.g. the one from Ubuntu Server.
I followed the guide of @fabiand https://dummdida.tumblr.com/post/171798262665/running-ubuntu-on-kubernetes-with-kubevirt-v030 but this fails.
First off the VM is visible:
drpaneas@localhost:~> kubectl get vms
NAME AGE
ubuntu 28s
However, its status is always Scheduled:
drpaneas@localhost:~> kubectl get vm ubuntu -o jsonpath='{.status.phase}'
Scheduled
At the Kubernetes Dashboard I see:
Readiness probe failed: cat: /tmp/healthy: No such file or directory
As a result, I cannot connect with VNC or Serial:
./virtctl vnc --kubeconfig ~/.kube/config ubuntu
remote-viewer connected
Error encountered: Unable to connect to VM because phase is Scheduled instead of Running
drpaneas@localhost:~> kubectl describe pods -l "kubevirt.io/domain=ubuntu"
Name: virt-launcher-ubuntu-rff4s
Namespace: default
Node: minikube/192.168.39.11
Start Time: Thu, 15 Mar 2018 00:12:07 +0100
Labels: kubevirt.io=virt-launcher
kubevirt.io/domain=ubuntu
kubevirt.io/vmUID=1dc3d522-27dd-11e8-bfd6-14487e853c29
Annotations: <none>
Status: Running
IP: 172.17.0.8
Containers:
compute:
Container ID: docker://f1d5dbaf3050d92197a8f59c4f4922deadf194f06eb3cfd77393cbbf596b7764
Image: kubevirt/virt-launcher:v0.3.0
Image ID: docker-pullable://kubevirt/virt-launcher@sha256:4274016b1bc831d8ab304cdf52296727aa5561af34acca7fe3bc8ff24aa5c680
Port: <none>
Command:
/entrypoint.sh
--qemu-timeout
5m
--name
ubuntu
--namespace
default
--kubevirt-share-dir
/var/run/kubevirt
--readiness-file
/tmp/healthy
--grace-period-seconds
45
State: Running
Started: Thu, 15 Mar 2018 00:12:08 +0100
Ready: True
Restart Count: 0
Requests:
memory: 1145892Ki
Readiness: exec [cat /tmp/healthy] delay=2s timeout=5s period=2s #success=1 #failure=5
Environment: <none>
Mounts:
/host-dev from host-dev (rw)
/var/run/kubevirt from virt-share-dir (rw)
/var/run/libvirt from libvirt-runtime (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2fjgn (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
virt-share-dir:
Type: HostPath (bare host directory volume)
Path: /var/run/kubevirt
HostPathType:
libvirt-runtime:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
host-dev:
Type: HostPath (bare host directory volume)
Path: /dev
HostPathType:
default-token-2fjgn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2fjgn
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned virt-launcher-ubuntu-rff4s to minikube
Normal SuccessfulMountVolume 2m kubelet, minikube MountVolume.SetUp succeeded for volume "libvirt-runtime"
Normal SuccessfulMountVolume 2m kubelet, minikube MountVolume.SetUp succeeded for volume "host-dev"
Normal SuccessfulMountVolume 2m kubelet, minikube MountVolume.SetUp succeeded for volume "virt-share-dir"
Normal SuccessfulMountVolume 2m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-2fjgn"
Normal Pulled 2m kubelet, minikube Container image "kubevirt/virt-launcher:v0.3.0" already present on machine
Normal Created 2m kubelet, minikube Created container
Normal Started 2m kubelet, minikube Started container
Warning Unhealthy 2m (x4 over 2m) kubelet, minikube Readiness probe failed: cat: /tmp/healthy: No such file or directory
Also:
drpaneas@localhost:~> kubectl describe vm ubuntu
Name: ubuntu
Namespace: default
Labels: guest=ubuntu
kubevirt.io/nodeName=minikube
kubevirt.io/size=large
Annotations: presets.virtualmachines.kubevirt.io/presets-applied=kubevirt.io/v1alpha1
virtualmachinepreset.kubevirt.io/large=kubevirt.io/v1alpha1
API Version: kubevirt.io/v1alpha1
Kind: VirtualMachine
Metadata:
Cluster Name:
Creation Timestamp: 2018-03-14T23:12:07Z
Generate Name: ubuntu
Generation: 0
Owner References:
API Version: kubevirt.io/v1alpha1
Block Owner Deletion: true
Controller: true
Kind: OfflineVirtualMachine
Name: ubuntu
UID: 1dc3a03b-27dd-11e8-bfd6-14487e853c29
Resource Version: 5638
Self Link: /apis/kubevirt.io/v1alpha1/namespaces/default/virtualmachines/ubuntu
UID: 1dc3d522-27dd-11e8-bfd6-14487e853c29
Spec:
Domain:
Devices:
Disks:
Disk:
Bus: virtio
Name: ubuntu
Volume Name: ubuntu
Features:
Acpi:
Enabled: true
Firmware:
Uuid: 42062b79-f35a-40dd-9da4-728270b2cc4e
Machine:
Type: q35
Resources:
Requests:
Memory: 1Gi
Volumes:
Name: ubuntu
Status:
Conditions:
Last Probe Time: <nil>
Last Transition Time: 2018-03-14T23:12:18Z
Message: server error. command Launcher.Sync failed: disk ubuntu references an unsupported source
Reason: Synchronizing with the Domain failed.
Status: False
Type: Synchronized
Interfaces:
Ip Address: 172.17.0.8
Node Name: minikube
Phase: Scheduled
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning SyncFailed 46s (x17 over 6m) virt-handler, minikube server error. command Launcher.Sync failed: disk ubuntu references an unsupported source
Hi, there is an error in the blog post: the volumes section of the VM is wrong. The correct VM should look like this:
apiVersion: kubevirt.io/v1alpha1
kind: OfflineVirtualMachine
metadata:
  name: ubuntu
spec:
  running: true
  selector:
    matchLabels:
      guest: ubuntu
  template:
    metadata:
      labels:
        guest: ubuntu
        kubevirt.io/size: large
    spec:
      domain:
        devices:
          disks:
          - name: ubuntu
            volumeName: ubuntu
            disk:
              bus: virtio
      volumes:
      - name: ubuntu
        persistentVolumeClaim:
          claimName: ubuntu1710
The persistentVolumeClaim line was missing from the volumes section in the blog post.
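For completeness: the claimName: ubuntu1710 above assumes a matching PersistentVolumeClaim already exists in the namespace. A minimal sketch of such a claim (the storage size and access mode here are assumptions for illustration, not taken from the blog post):

```yaml
# Hypothetical PVC backing the VM's "ubuntu" volume; size/access mode are
# placeholders and depend on how the Ubuntu image was provisioned.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubuntu1710
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

If the claim does not exist, the launcher Pod will report a failed volume mount in its events rather than the SyncFailed error seen above.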
Darn - I should just copy and paste. Will fix it in the post.
Only "get pod" has a "status" field that could show the "Running" stage. Maybe change to:
check "VM is running" "kubectl get pod | grep iscsi-demo-target | grep -q Running"
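Grepping for the iscsi-demo-target Pod by name still ties the check to a deployment detail that may change. A label-based sketch instead, using the kubevirt.io/domain label that appears on the virt-launcher Pod in the describe output earlier in this thread (the kubectl function below is a stub so the check logic can run standalone; in a real cluster, delete it and let the real kubectl be used):

```shell
#!/bin/sh
# Stub standing in for kubectl, purely so this sketch runs without a
# cluster; it simulates a launcher Pod whose phase is "Running".
kubectl() {
    echo "Running"
}

# Succeeds only when the launcher Pod for the given domain reports
# phase "Running" exactly (-x matches the whole line, so e.g.
# "NotRunning" would not pass).
vm_pod_running() {
    kubectl get pods -l "kubevirt.io/domain=$1" \
        -o 'jsonpath={.items[0].status.phase}' | grep -qx Running
}

if vm_pod_running testvm; then
    echo "VM is running"
fi
```

This selects the Pod by its domain label rather than its generated name, so the check keeps working even if the Pod naming scheme changes between releases.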