retspen / webvirtmgr

WebVirtMgr panel for managing virtual machines
http://retspen.github.io

Feature Request: Support for Ceph/rbd storage #312

Open ITBlogger opened 10 years ago

ITBlogger commented 10 years ago

Hi, we're currently in the process of moving to Ceph as our VM image storage. What are the chances of getting that added as a supported storage type?

Thanks,

Alex

retspen commented 10 years ago

Hello,

WebVirtMgr doesn't support creating and managing Ceph storage pools, but you can manage VMs that use Ceph images.

jsknnr commented 10 years ago

retspen,

It would be awesome if we could get this support. I know WebVirtMgr doesn't support it today, but if you could add support for RBD storage pools that would be fantastic! The support is already there in libvirt, so WebVirtMgr could manage Ceph storage pools through libvirt's RBD backend.

http://libvirt.org/storage.html#StorageBackendRBD
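
For reference, here is a hedged sketch of what driving that libvirt RBD backend from the Python bindings could look like; the pool name, Ceph pool name, monitor host, and secret UUID below are placeholders, not anything WebVirtMgr currently does:

```python
# Hedged sketch: define and start an RBD-backed storage pool via libvirt.
# Pool name, Ceph pool name, monitor host and secret UUID are placeholders.
import libvirt

POOL_XML = """
<pool type='rbd'>
  <name>cephpool</name>
  <source>
    <name>libvirt-pool</name>
    <host name='ceph-mon1.example.com' port='6789'/>
    <auth type='ceph' username='libvirt'>
      <secret uuid='2a5b08e4-3dca-4ff9-9298-3a45df402c57'/>
    </auth>
  </source>
</pool>
"""

conn = libvirt.open('qemu:///system')            # local hypervisor
pool = conn.storagePoolDefineXML(POOL_XML, 0)    # persistent definition
pool.create(0)                                   # activate the pool
pool.setAutostart(1)                             # start it with libvirtd
conn.close()
```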

EmbeddedAndroid commented 10 years ago

I've been working on adding something like this using the ceph-rest-api.

(Three screenshots of the Ceph cluster monitoring panels, 2014-06-11)

Would there be any interest in having features like this upstream in the codebase? It's very basic right now; it can monitor the Ceph cluster health. I plan to extend these features for more control.
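
This is not the actual patch, just a rough sketch of the polling approach; the host, port, and endpoint path are assumptions based on ceph-rest-api defaults:

```python
# Hypothetical sketch: poll cluster health from ceph-rest-api.
# The base URL and endpoint path are assumptions, not code from this patch.
import requests

CEPH_REST_API = "http://ceph-admin.example.com:5000/api/v0.1"

def cluster_health():
    # Ask ceph-rest-api for JSON output instead of the default plain text
    resp = requests.get(CEPH_REST_API + "/health",
                        headers={"Accept": "application/json"},
                        timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(cluster_health())
```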

jsknnr commented 10 years ago

That would be awesome.

primechuck commented 10 years ago

That would be a fantastic addition, mainly the ability to add and remove RBD volumes in libvirt from the UI.

MACscr commented 10 years ago

This would be a great feature!

EmbeddedAndroid commented 10 years ago

Once I get back from traveling I'll start to focus on these changes. It would be nice if anyone interested could help test. I only have a single ceph cluster, and would want to ensure it works properly with multiple clusters.

MACscr commented 10 years ago

I will have a ceph cluster shortly that I can use to help with the testing.

retspen commented 10 years ago

Playbook for quickly deploying Ceph in Vagrant - https://github.com/ceph/ceph-ansible

nlgordon commented 10 years ago

I'm getting your code set up in my home and work test environments so I can help build out the RBD-backed volumes. We have a test rack at work where we have been using CephFS for libvirt, but it has its limitations.

retspen commented 10 years ago

I have added support for RBD storage pools (creating and deleting volumes); after successful testing I'll push it to the master branch.
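
For anyone who wants to exercise the same operations outside the UI while testing, here is a minimal sketch against the libvirt Python bindings (the pool and volume names are placeholders):

```python
# Minimal sketch: create and delete a volume in an existing RBD pool.
# "cephpool" and "test1" are placeholder names.
import libvirt

VOL_XML = """
<volume>
  <name>test1</name>
  <capacity unit='G'>10</capacity>
</volume>
"""

conn = libvirt.open('qemu:///system')
pool = conn.storagePoolLookupByName('cephpool')

vol = pool.createXML(VOL_XML, 0)   # creates the backing RBD image
print(pool.listVolumes())          # volume names now visible in the pool

vol.delete(0)                      # removes the RBD image again
conn.close()
```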

MACscr commented 10 years ago

And these RBD volumes will be automatically created on KVM instance creation?

retspen commented 10 years ago

I added the Secrets app - f933d8f2942a7ccb2a79a55d1ecf6541e95073c4. The other new feature (Ceph storage pool) is being tested.
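
For context on what the Secrets app wraps, here is a hedged sketch of defining a cephx secret directly through the libvirt Python bindings; the usage name and key below are placeholders:

```python
# Hedged sketch: define a libvirt secret for cephx auth and load the key.
# The base64 key below is a placeholder, not a real credential.
import base64
import libvirt

SECRET_XML = """
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
"""

CEPHX_KEY_B64 = "cGxhY2Vob2xkZXItY2VwaHgta2V5"  # placeholder base64 key

conn = libvirt.open('qemu:///system')
secret = conn.secretDefineXML(SECRET_XML, 0)
# libvirt stores the raw bytes, so decode the base64 string that
# `ceph auth get-key` prints before handing it over
secret.setValue(base64.b64decode(CEPHX_KEY_B64), 0)
print(secret.UUIDString())  # pool/disk XML references this UUID
conn.close()
```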

retspen commented 10 years ago

Support Ceph storage pool - 1d424d77ea86caf111d0f002f9875965633df4f3

primechuck commented 10 years ago

Adding the storage pool worked fine, but when a VM was created it didn't generate the correct libvirt settings for the pool: it produced a file-type disk instead of a network (RBD) disk. This is the disk entry it made:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='secondary-libvirt/RBDTest'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>

retspen commented 10 years ago

After successful testing I'll add the part for creating VMs with Ceph.

EmbeddedAndroid commented 10 years ago

I've added the storage pool as well. Looks good to me. I will add some health monitoring stats to complement this. Thanks @retspen!

ITBlogger commented 10 years ago

On CentOS the current setup won't work as virt-manager/libvirt does not support RBD pools in CentOS 6.5.

To work around this, I am having to create Ceph volumes using qemu-img create, build an XML template for the storage image, and use virsh attach-device to attach the RBD storage image to the VM.

Also, Ceph images should always be in RAW format, per the Ceph documentation.

What Puppet runs to do this:

qemu-img create -f raw rbd:(rbd-pool-name)/(rbd-image-name) (image capacity)
Example: qemu-img create -f raw rbd:libvirt/test1 80G

virt-install --name (vm-name) --ram (ram size) --vcpus (# of CPUs) --nodisks --description (desc) --network bridge=(virtnet),mac=(virtmac),model=(virtnic) --graphics vnc,listen=0.0.0.0 --os-type (virtostype) --os-variant (virtosvariant) --virt-type (virttype) --autostart --pxe
Example: virt-install --name test1 --ram 1024 --vcpus 1 --nodisks --description 'test vm' --network bridge=br0,mac=52:54:00:82:a1:a1,model=virtio --graphics vnc,listen=0.0.0.0 --os-type linux --os-variant virtio26 --virt-type kvm --autostart --pxe

ERB template for network disk xml:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='<%= @auth_user %>'>
    <secret type='<%= @secret_type %>' usage='<%= @secret_usage %>'/>
  </auth>
  <source protocol='<%= @virtproto %>' name='<%= @pool %>/<%= @vmname %>'>
    <host name='<%= @volhost %>' port='<%= @volport %>'/>
  </source>
  <target dev='<%= @targetdev %>' bus='virtio'/>
</disk>

Example created XML:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='admin'>
    <secret type='ceph' usage='ceph_admin'/>
  </auth>
  <source protocol='rbd' name='libvirt/test1'>
    <host name='cephrbd' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

virsh attach-device (vmname) (path to xml) --persistent
Example: virsh attach-device test1 /tmp/test1_rbd_virtdisk.xml --persistent

virsh start (vmname)
Example: virsh start test1
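
In case it helps anyone scripting the same workaround from Python instead of Puppet, here is a hedged sketch of the attach and start steps with the libvirt bindings, reusing the domain name and XML path from the example above:

```python
# Hedged sketch: attach the rendered RBD disk XML to a defined VM and start
# it, persisting the change like `virsh attach-device ... --persistent`.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('test1')

with open('/tmp/test1_rbd_virtdisk.xml') as f:
    disk_xml = f.read()

# Apply to the persistent config; add VIR_DOMAIN_AFFECT_LIVE as well if the
# domain is already running and should see the disk immediately.
dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

dom.create()   # equivalent of `virsh start test1`
conn.close()
```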

ITBlogger commented 10 years ago

By the way, the error that you get when trying to make an RBD pool on an OS that doesn't support them is "internal error missing backend for pool type 8"

retspen commented 10 years ago

On Ubuntu 14.04, Fedora 20, and RHEL 7 it works fine.

retspen commented 10 years ago

Do you have problems when removing an image from the RBD pool?

primechuck commented 10 years ago

On Ubuntu 14.04 I was able to create the pool, create a few images in the pool using the UI, delete the images, and delete the pool.

Deleting one of the images took about 5 minutes because it was 4TB and the UI was happy waiting for the libvirt command to complete.

ITBlogger commented 10 years ago

Unfortunately, we are only certified to use CentOS 6.x.

primechuck commented 10 years ago

You'll need to update libvirt/qemu in order to use RBD support. I don't remember the minimum version, but Ceph has install instructions for getting the new packages on RPM distros.

http://ceph.com/docs/master/install/install-vm-cloud/#install-qemu

ITBlogger commented 10 years ago

Yes, I have all that. Ceph does not supply updated packages for libvirt, only QEMU.

retspen commented 10 years ago

Create VM with rbd storage pool - 1a34115ddd349bce5965c192965a6066bd8f349e

MACscr commented 10 years ago

Could someone write up a small article in the wiki about what we need to set up on the Ceph nodes for webvirtmgr to communicate with them, and, while you're at it (this appears to be related), what these "secrets" are all about?

MACscr commented 10 years ago

When I try to add a secret, it says "please match requested format", but I have no idea what that format is; I have simply pasted in the cephx key that was created for the Ceph user. I checked the forms.py file in the secrets folder and the only limitation I see in there is 100 characters, and I am only using 40. Suggestions?

retspen commented 10 years ago

https://ceph.com/docs/master/rbd/libvirt/

elg commented 10 years ago

MACscr, on Debian I ran into some issues with the packaged qemu version (without RBD support). I recompiled kvm and qemu using this guide: http://cephnotes.ksperis.com/blog/2013/09/12/using-ceph-rbd-with-libvirt-on-debian-wheezy and debian/rules for qemu. After this, qemu-img is able to use Ceph directly (and my secret appears in webvirtmgr).

Anyway, I now have an issue trying to create the storage pool through RBD. I get this error: "internal error unknown storage pool type rbd" and I can't find the source of it. Any clue?

barryorourke commented 10 years ago

You'll need to recompile libvirt to support RBD storage pools; it's pretty easy on SL6, so hopefully it should be on Debian too.

elg commented 10 years ago

Yes, thanks, I figured that out myself, and it was quite easy with the libvirt from backports. Unfortunately, this version has a bug and segfaults when you try to access an RBD pool.

I'm done with recompilations and backports: I'll give Ubuntu a try for my host.

MACscr commented 10 years ago

I just reprovisioned my Debian cluster with Ubuntu for the same reasons.

retspen commented 10 years ago

Ubuntu 14.04 - host server (libvirt supports RBD storage). Ubuntu 12.04 or Debian 6 - Ceph cluster.

primechuck commented 10 years ago

Has anyone else been running into this bug related to this feature? http://comments.gmane.org/gmane.comp.emulators.libvirt/96702 It looks like volumes created using libvirt are broken in versions > 1.2.4.

samuelet commented 9 years ago

It is not broken with versions > 1.2.4 but with versions < 1.2.4; indeed, I'm running into this bug with Ubuntu 14.04 and libvirt 1.2.2-0ubuntu13.1.5.