kubevirt / containerized-data-importer

Data import service for Kubernetes, designed with KubeVirt in mind.
Apache License 2.0

virDBusGetSystemBus:109 : internal error: Unable to get DBus system bus connection: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory #461

Closed shiywang closed 5 years ago

shiywang commented 6 years ago

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: I created the following VirtualMachine with `oc create -f`:

apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-alpine-datavolume
  name: vm-alpine-datavolume
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-alpine-datavolume
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumedisk1
            volumeName: datavolumevolume1
        resources:
          requests:
            memory: 64M
      volumes:
      - dataVolume:
          name: alpine-dv
        name: datavolumevolume1
  dataVolumeTemplates:
  - metadata:
      name: alpine-dv
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
      source:
        http:
          url: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

➜  kubevirt-ansible git:(clone) ✗ virtctl start vm-alpine-datavolume
VM vm-alpine-datavolume was scheduled to start

➜  kubevirt-ansible git:(clone) ✗ oc get pods
NAME                                       READY     STATUS      RESTARTS   AGE
cdi-deployment-767b445c45-92jj6            1/1       Running     0          2d
importer-alpine-dv-clvfw                   0/1       Completed   0          2h
virt-launcher-vm-alpine-datavolume-ntvq2   0/1       Error       0          5m
➜  kubevirt-ansible git:(clone) ✗ oc logs -f virt-launcher-vm-alpine-datavolume-ntvq2
level=info timestamp=2018-09-20T09:18:36.911477Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-09-20T09:18:36.911671Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-09-20T09:18:36.912908Z pos=libvirt.go:261 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
+ mkdir -p /var/log/kubevirt
+ touch /var/log/kubevirt/qemu-kube.log
+ chown qemu:qemu /var/log/kubevirt/qemu-kube.log
+ [[ -z '' ]]
++ ip -o -4 a
++ tr -s ' '
++ cut '-d ' -f 2
++ grep -v -e '^lo[0-9:]*$'
++ head -1
+ LIBVIRTD_DEFAULT_NETWORK_DEVICE=eth0
+ echo 'Selected "eth0" as primary interface'
+ [[ -n eth0 ]]
+ echo 'Setting libvirt default network to "eth0"'
+ mkdir -p /etc/libvirt/qemu/networks/autostart
+ cat
+ ln -s -f /etc/libvirt/qemu/networks/default.xml /etc/libvirt/qemu/networks/autostart/default.xml
+ echo 'cgroup_controllers = [ ]'
+ '[' -d /dev/hugepages ']'
+ echo 'log_outputs = "1:stderr"'
+ /usr/sbin/libvirtd
2018-09-20 09:18:37.015+0000: 48: info : libvirt version: 4.2.0, package: 1.fc28 (Unknown, 2018-04-04-03:04:18, a0570af3fea64d0ba2df52242c71403f)
2018-09-20 09:18:37.015+0000: 48: info : hostname: vm-alpine-datavolume
2018-09-20 09:18:37.015+0000: 48: error : virDBusGetSystemBus:109 : internal error: Unable to get DBus system bus connection: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
2018-09-20 09:18:37.569+0000: 48: error : virDBusGetSystemBus:109 : internal error: Unable to get DBus system bus connection: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
2018-09-20 09:18:37.569+0000: 48: warning : networkStateInitialize:763 : DBus not available, disabling firewalld support in bridge_network_driver: internal error: Unable to get DBus system bus connection: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
2018-09-20 09:18:37.608+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:00.0/config': Read-only file system
2018-09-20 09:18:37.608+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:01.0/config': Read-only file system
2018-09-20 09:18:37.608+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:01.1/config': Read-only file system
2018-09-20 09:18:37.610+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:01.2/config': Read-only file system
2018-09-20 09:18:37.611+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:01.3/config': Read-only file system
2018-09-20 09:18:37.614+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:02.0/config': Read-only file system
2018-09-20 09:18:37.617+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:03.0/config': Read-only file system
2018-09-20 09:18:37.617+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:04.0/config': Read-only file system
2018-09-20 09:18:37.617+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:05.0/config': Read-only file system
2018-09-20 09:18:37.618+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:06.0/config': Read-only file system
2018-09-20 09:18:37.618+0000: 52: error : virPCIDeviceConfigOpen:312 : Failed to open config space file '/sys/bus/pci/devices/0000:00:07.0/config': Read-only file system
2018-09-20 09:18:38.979+0000: 48: error : virCommandWait:2600 : internal error: Child process (/usr/sbin/dmidecode -q -t 0,1,2,3,4,17) unexpected exit status 1: /dev/mem: No such file or directory

2018-09-20 09:18:38.990+0000: 48: error : virNodeSuspendSupportsTarget:336 : internal error: Cannot probe for supported suspend types
2018-09-20 09:18:38.990+0000: 48: warning : virQEMUCapsInit:1229 : Failed to get host power management capabilities
level=info timestamp=2018-09-20T09:18:46.914711Z pos=libvirt.go:276 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-09-20T09:18:46.922540Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/golden-images_vm-alpine-datavolume"
level=info timestamp=2018-09-20T09:18:46.922930Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-09-20T09:18:46.923018Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready"
panic: timed out waiting for domain to be defined

goroutine 1 [running]:
main.waitForDomainUUID(0x45d964b800, 0x144b6c0, 0xc4204260c0, 0xc4202380e0, 0xc4205c16e0)
    /root/go/src/kubevirt.io/kubevirt/cmd/virt-launcher/virt-launcher.go:219 +0x2cc
main.main()
    /root/go/src/kubevirt.io/kubevirt/cmd/virt-launcher/virt-launcher.go:333 +0x81d
2018-09-20 09:23:47.047+0000: 32: error : virNetSocketReadWire:1809 : End of file while reading data: Input/output error
virt-launcher exited with code 2
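As an aside, the `+`-prefixed trace in the log above shows how the launcher entrypoint picks its primary network interface. That pipeline can be reproduced standalone; the sample `ip -o -4 a` output below is made up for illustration (addresses and device names are hypothetical):

```shell
# Made-up output mimicking `ip -o -4 a` (hypothetical addresses)
sample_ip_output() {
  printf '1: lo    inet 127.0.0.1/8 scope host lo\n'
  printf '2: eth0    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0\n'
}

# Same pipeline as in the trace: squeeze repeated spaces, take the
# second field (the device name), drop loopback entries, keep the
# first remaining device.
sample_ip_output | tr -s ' ' | cut -d ' ' -f 2 | grep -v -e '^lo[0-9:]*$' | head -1
# prints: eth0
```

This is why the log reports `Selected "eth0" as primary interface`: eth0 is simply the first non-loopback device with an IPv4 address.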

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

Since I don't have the latest environment, I had to update kubevirt manually. When I applied the manifest, I noticed that virt-handler stayed unchanged, which is weird:

➜  kubevirt-ansible git:(clone) ✗ oc apply -f ~/Downloads/kubevirt.yaml 
clusterrole "kubevirt.io:admin" configured
clusterrole "kubevirt.io:edit" configured
clusterrole "kubevirt.io:view" configured
serviceaccount "kubevirt-apiserver" unchanged
clusterrolebinding "kubevirt-apiserver" configured
clusterrolebinding "kubevirt-apiserver-auth-delegator" configured
rolebinding "kubevirt-apiserver" unchanged
role "kubevirt-apiserver" unchanged
clusterrole "kubevirt-apiserver" configured
clusterrole "kubevirt-controller" configured
serviceaccount "kubevirt-controller" unchanged
serviceaccount "kubevirt-privileged" unchanged
clusterrolebinding "kubevirt-controller" configured
clusterrolebinding "kubevirt-privileged-cluster-admin" configured
clusterrole "kubevirt.io:default" configured
clusterrolebinding "kubevirt.io:default" configured
service "virt-api" unchanged
deployment "virt-api" unchanged
deployment "virt-controller" unchanged
daemonset "virt-handler" unchanged
customresourcedefinition "virtualmachineinstances.kubevirt.io" configured
customresourcedefinition "virtualmachineinstancereplicasets.kubevirt.io" configured
customresourcedefinition "virtualmachineinstancepresets.kubevirt.io" configured
customresourcedefinition "virtualmachines.kubevirt.io" configured
➜  kubevirt-ansible git:(clone) ✗  oc get pod -n kube-system 
NAME                                                           READY     STATUS    RESTARTS   AGE
master-api-cnv-executor-shiywang-master1.example.com           1/1       Running   0          2d
master-controllers-cnv-executor-shiywang-master1.example.com   1/1       Running   0          2d
master-etcd-cnv-executor-shiywang-master1.example.com          1/1       Running   0          2d
virt-api-5bd7d86b5c-bds8s                                      1/1       Running   0          3h
virt-api-5bd7d86b5c-lf8rf                                      1/1       Running   0          3h
virt-controller-6cf64f699c-bxq9s                               1/1       Running   0          3h
virt-controller-6cf64f699c-vjxb6                               1/1       Running   0          3h
virt-handler-hdvcv                                             1/1       Running   0          2d
virt-handler-rm5pb                                             1/1       Running   0          2d

manifest I used: https://github.com/kubevirt/kubevirt/releases/download/v0.8.0/kubevirt.yaml

awels commented 6 years ago

Looking at the error, it appears to be a KubeVirt issue with accessing some network stuff (they have been making a lot of changes in that area lately). Does the same happen when you create a VM that doesn't use data volumes?
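One quick way to test that suggestion is a VM backed by a registryDisk instead of a DataVolume. This is only a sketch adapted from the upstream v0.8.0-era v1alpha2 examples; the image name and exact field layout are assumptions and may differ in your version:

```yaml
# Hypothetical minimal VM with no DataVolume, for isolating the failure.
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  name: vm-cirros-nodv
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-cirros-nodv
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: registrydisk
            volumeName: registryvolume
        resources:
          requests:
            memory: 64M
      volumes:
      - name: registryvolume
        registryDisk:
          # Demo image from the upstream examples (assumed available)
          image: kubevirt/cirros-registry-disk-demo
```

If this VM boots, the problem likely sits in the DataVolume path; if it fails the same way, it points at virt-launcher/libvirt rather than CDI.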

kubevirt-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot commented 5 years ago

@kubevirt-bot: Closing this issue.

In response to [this](https://github.com/kubevirt/containerized-data-importer/issues/461#issuecomment-464451194):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.