wetopi / docker-volume-rbd

Docker Engine managed plugin to manage RBD volumes.
MIT License

VolumeDriver.Mount: volume-rbd Name=my_rbd_volume Request=Mount Message= unable to mount: exit status 1. #6

Closed teralype closed 2 years ago

teralype commented 5 years ago

Following the instructions on the README.md:

# docker volume create -d wetopi/rbd -o size=206 my_rbd_volume
Error response from daemon: create my_rbd_volume: VolumeDriver.Create: volume-rbd Name=my_rbd_volume Request=Create Message=unable to create ceph rbd image: exit status 6

I figured out that I need to add the following to the global section in /etc/ceph/ceph.conf

rbd default features = 3

Then, after restarting Docker, it works:

# docker volume create -d wetopi/rbd -o size=206 my_rbd_volume
my_rbd_volume

However, I can't start a Docker container using this volume:

# docker run -it -v my_rbd_volume:/data --volume-driver=wetopi/rbd busybox sh
docker: Error response from daemon: error while mounting volume '': VolumeDriver.Mount: volume-rbd Name=my_rbd_volume Request=Mount Message= unable to mount: exit status 1.

This happens in both Ubuntu 14.04 and Ubuntu 16.04 using Ceph 10.2.11. Any ideas?
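One check worth doing on the Docker host (a diagnostic sketch, not from the plugin's README): when the kernel RBD client refuses to map an image because of unsupported features, it logs the rejected feature bitmask to the kernel ring buffer:

```shell
# Look for the krbd error behind the plugin's "exit status 1";
# a feature mismatch shows up in the kernel log roughly like
# "rbd: image <name>: image uses unsupported features: 0x..."
# (the exact bitmask depends on which features the image has enabled)
dmesg | grep rbd
```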

box-daxter commented 5 years ago

Hi,

You are right: by default the driver doesn't set any features on the image, so you have to set your default image features in ceph.conf.

The kernel RBD (krbd) mount client doesn't support several image features, so the correct way to deal with it is exactly what you did:

rbd default features = 3
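For context (this part is my addition, based on the rbd man page): the value is a bitmask, and 3 enables exactly two kernel-friendly features, layering (bit 1) and striping (bit 2):

```shell
# RBD feature bits (from the rbd man page):
#   layering = 1, striping = 2, exclusive-lock = 4,
#   object-map = 8, fast-diff = 16, deep-flatten = 32, journaling = 64
layering=1
striping=2
echo $((layering + striping))   # -> 3, the value used above
```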

r3pek commented 5 years ago

@teralype still have this problem? I just bumped into it... :(

box-daxter commented 5 years ago

Hi @r3pek,

You have to set the default image features cluster-wide, because the kernel mounter doesn't support the features that Ceph enables by default for new images.

You need to add this line to the [global] section of all your ceph.conf config files, normally placed at /etc/ceph/ceph.conf:

rbd_default_features = 3

Option 2: don't change the cluster defaults, and instead create every image manually with the features you want, using the CLI client. I don't recommend it.
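For completeness, Option 2 would look something like this (a sketch; the pool and image names are placeholders, and --image-feature restricts the image to the listed features):

```shell
# Create a 1 GiB image with only the kernel-mountable layering feature
# ("mypool" and "my_rbd_volume" are example names, not from this thread)
rbd create mypool/my_rbd_volume --size 1024 --image-feature layering

# Verify: the output should report "features: layering"
rbd info mypool/my_rbd_volume
```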

More info: http://docs.ceph.com/docs/jewel/man/8/rbd/

Regards.

r3pek commented 5 years ago

@box-daxter I created the images manually (using the dashboard), so it was not a features problem; the kernel could mount the images just fine.

box-daxter commented 5 years ago

Ok, assuming your rbd info looks like:

root@jump1:~# rbd info ssd/xxxxxxxxxxxxxxxxxxx
rbd image 'xxxxxxxxxxxxxxxxxxx':
        size 40GiB in 10240 objects
        order 22 (4MiB objects)
        block_name_prefix: rbd_data.231638433f680b
        format: 2
        features: layering, striping
        flags:
        create_timestamp: Tue Jan 12 19:42:07 2019
        stripe unit: 4MiB
        stripe count: 1

Then the issue is in the rbd plugin installation. It's a little tricky. Drain the node and reinstall the whole driver.

Ensure you can manually mount the rbd image from bash in the faulty node.

rbd map & mount /dev/rbd0 /mnt/test.
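Spelled out (a sketch; the pool/image names and mountpoint are placeholders), that manual check is roughly:

```shell
# Map the image through krbd; this prints the device node, e.g. /dev/rbd0
rbd map mypool/my_rbd_volume

# Mount it and confirm you can write a file
mkdir -p /mnt/test
mount /dev/rbd0 /mnt/test
touch /mnt/test/ok

# Clean up
umount /mnt/test
rbd unmap /dev/rbd0
```

If any of these steps fails, the problem is below the Docker plugin (kernel client, network, or image features), not in the plugin itself.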

r3pek commented 5 years ago

features: layering, exclusive-lock

and i did drain the node several times... :( (mount still works on host)

box-daxter commented 5 years ago

I'm not 100% sure, but if I remember correctly, exclusive-lock is not supported by the kernel client yet.
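If exclusive-lock (or another feature) turns out to be the blocker, it can be stripped from an existing image without recreating it; a sketch, with placeholder pool/image names:

```shell
# Disable a feature the kernel client can't handle, then retry the map.
# Note: dependent features must go first, e.g. fast-diff and object-map
# have to be disabled before exclusive-lock can be.
rbd feature disable mypool/my_rbd_volume exclusive-lock

# The features line should now omit exclusive-lock
rbd info mypool/my_rbd_volume
```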

Ultima1252 commented 5 years ago

Not sure what Ceph version you are on, but I was hitting the same "unable to mount: exit status 1" error on Nautilus. This fork [1] adds Nautilus support, which fixed the issue for me.

[1] https://github.com/Nenzyz/docker-volume-rbd

sitamet commented 4 years ago

New driver release 3.0.0 with Nautilus support, thanks to @diurnalist.