OpenNebula / one

The open source Cloud & Edge Computing Platform bringing real freedom to your Enterprise Cloud 🚀
http://opennebula.io
Apache License 2.0

Support multiple hypervisors on each virtualization node #3259

dann1 opened 5 years ago

dann1 commented 5 years ago

Description A Linux OS can run KVM and LXD simultaneously, acting as a virtualization node that deploys both containers and VMs. Currently, using a single node as both a KVM and an LXD hypervisor in OpenNebula runs into several limitations.

Use case Properly set up a node as both a KVM and an LXD virtualization node

Interface Changes There could be a lot of changes, since the VMM driver run when deploying a container is selected based on the destination node, not on whether the VM template states that the VM is a container or a regular VM. The wild VMs would also need to be classified.

Additional Context Proxmox treats its virtualization nodes this way, clearly differentiating containers from VMs. In the case of OpenNebula, it would just be a matter of marking the hypervisor setting in the template as a required field and selecting the VMM driver based on it.
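To illustrate the proposed interface change, a hypothetical VM template might look like the sketch below. A required HYPERVISOR attribute driving driver selection is the proposal, not current behavior; today, placement on a matching node is expressed via host scheduling requirements instead:

```
# Hypothetical template sketch for the proposed behavior: a required
# HYPERVISOR attribute would select the VMM driver on a mixed node.
NAME       = "alpine-container"
CPU        = 1
MEMORY     = 256
HYPERVISOR = "lxd"    # proposed required field selecting the vmm driver

# Existing mechanism for comparison: constrain placement to hosts whose
# monitored HYPERVISOR attribute matches.
# SCHED_REQUIREMENTS = "HYPERVISOR = \"lxd\""
```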

Progress Status

Franco-Sparrow commented 1 year ago

@dann1 I would love to see this feature come true. It would be awesome if multiple hypervisors could converge on the same host without the problems you detailed above. I understand that, as a way to prevent these problems, the team established dependencies on the hypervisor binaries so as not to allow the installation of multiple hypervisors on the same host, but this is something the competition has done, and I am sure that OpenNebula could do it as well. Having KVM, LXC and Firecracker on the same OpenNebula host, I hope to see it at least for ON 7.0.0.

Keep up the hard work 💪

Githopp192 commented 10 months ago

I saw this good write-up:

https://opennebula.io/blog/experiences/using-lxd-and-kvm-on-the-same-host/

Then I tried to install Firecracker on a RHEL 8.9 system (an OpenNebula KVM node). Let's see:

```
$ rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
Retrieving https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
warning: /var/tmp/rpm-tmp.Ioitr4: Header V4 RSA/SHA256 Signature, key ID 2f86d6a1: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:epel-release-8-19.el8            ################################# [100%]
Many EPEL packages require the CodeReady Builder (CRB) repository.
It is recommended that you run /usr/bin/crb enable to enable the CRB repository.
[root@nextcentos log]# /usr/bin/crb enable
Enabling CRB repo
Repository 'codeready-builder-for-rhel-8-x86_64-rpms' is enabled for this system.
CRB repo is enabled and named: codeready-builder-for-rhel-8-x86_64-rpms
```

```
$ dnf install opennebula-node-firecracker
Updating Subscription Management repositories.
Red Hat CodeReady Linux Builder for RHEL 8 x86_64 (RPMs) 1.2 MB/s | 8.8 MB 00:07
Last metadata expiration check: 0:00:07 ago on Tue 19 Dec 2023 07:04:03 PM CET.
Error:
 Problem: problem with installed package opennebula-node-kvm-6.6.1.1-1.el8.noarch
```
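To see exactly which package relationships block the install, one diagnostic sketch is to query the declared conflicts directly (output depends on the configured repositories and installed versions; `repoquery` needs the dnf-plugins-core package):

```shell
# List the conflicts declared by the installed KVM node package,
# and by the firecracker node package available in the repos.
rpm -q --conflicts opennebula-node-kvm
dnf repoquery --conflicts opennebula-node-firecracker
```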

kitatek commented 5 months ago

Hello,

Aiming to launch KVM guests on ONE 6.8 LXC testbed hardware to complement the current ONE LXC limitations (unprivileged containers prevent a desktop container)...

It fails at the installation of opennebula-node-kvm:

```
$ sudo apt-get -y install opennebula-node-kvm
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  bindfs libarchive-tools libfuse2 liblxc-common liblxc1 libpam-cgfs libvncserver1 lxc lxc-utils lxcfs uidmap
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libnbd-bin libnbd0
The following packages will be REMOVED:
  opennebula-node-lxc
The following NEW packages will be installed:
  libnbd-bin libnbd0 opennebula-node-kvm
0 upgraded, 3 newly installed, 1 to remove and 15 not upgraded.
Need to get 136 kB of archives.
After this operation, 208 kB of additional disk space will be used.
Get:1 https://downloads.opennebula.io/repo/6.8/Ubuntu/22.04 stable/opennebula amd64 opennebula-node-kvm all 6.8.0-1 [11.9 kB]
Get:2 http://fr.archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
Get:3 http://fr.archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd-bin amd64 1.10.5-1 [52.8 kB]
Fetched 136 kB in 1s (211 kB/s)
(Reading database ... 139271 files and directories currently installed.)
Removing opennebula-node-lxc (6.8.0-1) ...
rmdir: failed to remove '/var/lib/lxc-one': Directory not empty
dpkg: error processing package opennebula-node-lxc (--remove):
 installed opennebula-node-lxc package post-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
 opennebula-node-lxc
Processing was halted because there were too many errors.
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
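The root cause in this log is generic: `rmdir` refuses to remove a non-empty directory, which is exactly what happens when the opennebula-node-lxc post-removal script hits a non-empty /var/lib/lxc-one. A minimal sketch of that failure mode, using a temporary directory instead of the real path:

```shell
# rmdir only removes empty directories; with content present it fails,
# which is what aborts the package's postrm script.
d=$(mktemp -d)
touch "$d/leftover"
if rmdir "$d" 2>/dev/null; then
  echo "removed"
else
  echo "rmdir failed: directory not empty"
fi
rm "$d/leftover"
rmdir "$d" && echo "removed after emptying"
```

A more robust postrm would tolerate (or explicitly clean up) leftover files rather than calling plain `rmdir`.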

A single-host testbed for ONE makes a lot of sense when trying out ONE with KVM, LXC and Firecracker, for example.

It would greatly ease the evaluation work prior to ONE Open Cluster adoption. Open had better be ... open :)

Strongly support enabling this possibility.

kitatek commented 5 months ago

Partial, simple support would be perfectly OK as a first stage, for evaluation only (e.g. requiring several names for the same IP address). A manual recipe would also work as an intermediate step to allow such testing and evaluation, until the solution eventually becomes a supported case.
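One possible shape for such a manual recipe is sketched below. This is an unsupported, evaluation-only workaround, not a documented OpenNebula procedure: it force-installs the second node package despite the declared conflict, and `--force-conflicts` can leave the system in a state the packaging does not anticipate. The package names come from the logs above.

```shell
# Unsupported evaluation-only sketch: download the KVM node package and
# force-install it alongside the already installed LXC node package,
# overriding the declared package conflict. Do NOT use on production nodes.
apt-get download opennebula-node-kvm
sudo dpkg -i --force-conflicts ./opennebula-node-kvm_*.deb
```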