wetopi / docker-volume-rbd

Docker Engine managed plugin to manage RBD volumes.

I have to perform an `rbd map` command on the host before this works #15

Closed: byoungb closed this issue 2 years ago

byoungb commented 4 years ago

I have no idea what is going on, but until I issue an `rbd map` command on the host, I get:

[byoungb@ceph4 ~]$ docker volume create -d wetopi/rbd -o size=206 ceph4
Error response from daemon: create ceph4: VolumeDriver.Create: volume-rbd Name=ceph4 Request=Create Message=unable to create ceph rbd image: exit status 2
[byoungb@ceph4 ~]$ docker volume ls
DRIVER              VOLUME NAME
wetopi/rbd:latest   ceph1
wetopi/rbd:latest   ceph2
wetopi/rbd:latest   ceph3
[byoungb@ceph4 ~]$ docker run -ti --rm -v ceph1:/data centos:7
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/052d7af796066c3007d51fbc5332cec04c8b7022a16cf03a5a99a508ac849972/rootfs': VolumeDriver.Mount: volume-rbd Name=ceph1 Request=Mount Message= unable to map: ceph1%!(EXTRA *exec.ExitError=exit status 2).
[byoungb@ceph4 ~]$ rbd create foo --size=128
[byoungb@ceph4 ~]$ sudo rbd map foo
[sudo] password for byoungb: 
/dev/rbd0
[byoungb@ceph4 ~]$ docker volume create -d wetopi/rbd -o size=206 ceph4
ceph4
[byoungb@ceph4 ~]$ docker run -ti --rm -v ceph1:/data centos:7
[root@a6eb5e08866c /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          77G  3.7G   73G   5% /
tmpfs            64M     0   64M   0% /dev
tmpfs            16G     0   16G   0% /sys/fs/cgroup
shm              64M     0   64M   0% /dev/shm
/dev/rbd1       196M  1.8M  180M   1% /data
/dev/md126       77G  3.7G   73G   5% /etc/hosts
tmpfs            16G     0   16G   0% /proc/acpi
tmpfs            16G     0   16G   0% /proc/scsi
tmpfs            16G     0   16G   0% /sys/firmware
[root@a6eb5e08866c /]# exit

Yeah, so as you can see above, you can list existing rbd volumes, but you cannot create or mount them until you have run `rbd map` at least once on the host. (I was able to confirm this with certainty because I had three attempts at figuring it out: everything worked fine on ceph1 basically right away, so it was really hard to pin down for a while.)
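A quick way to see what the manual map is actually changing is to check whether the rbd kernel module is loaded before and after it. A diagnostic sketch (the image name foo is reused from the transcript above); `rbd map` loads the kernel module on demand, which would explain why mapping once "fixes" the plugin:

lsmod | grep rbd     # no output here means the module is not loaded
sudo rbd map foo     # rbd map loads the rbd module on demand before mapping
lsmod | grep rbd     # the module should now be listed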

box-daxter commented 4 years ago

@byoungb Ensure the rbd kernel module is loaded on startup. Check that the file /etc/modules contains a line with rbd:

cat /etc/modules

If not, add it:

echo "rbd" >> /etc/modules
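That entry only takes effect on the next boot; to load the module immediately without rebooting (standard modprobe usage, shown for completeness):

sudo modprobe rbd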

Regards

psukys commented 4 years ago

Adding rbd to /etc/modules didn't solve the problem for me, but configuring rbdmap did.
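For anyone landing here: rbdmap is driven by the /etc/ceph/rbdmap map file plus the rbdmap systemd service. A minimal sketch, assuming a pool named rbd, an image named foo, and the default admin keyring path (all placeholders, not taken from this thread):

# /etc/ceph/rbdmap -- one "pool/image id=<user>,keyring=<path>" entry per line
rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# map the listed images now and on every boot
sudo systemctl enable --now rbdmap.service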

byoungb commented 4 years ago

What exactly did you configure with rbdmap? Or did you just enable the systemd service? Since the rbd maps are created by the Docker volume plugin, I am not exactly sure what I would need to configure.

sitamet commented 2 years ago

This Docker volume plugin needs the rbd kernel module loaded and configured on your host.

We ensure this module is loaded by adding it to /etc/modules:

lorem@host:# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
dummy
rbd
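On hosts where systemd loads modules at boot, an equivalent approach is a drop-in file under /etc/modules-load.d/ (the file name rbd.conf is our choice, not from this thread):

# have systemd-modules-load load rbd at boot
echo rbd | sudo tee /etc/modules-load.d/rbd.conf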