canonical / microceph

Ceph for a one-rack cluster and appliances
https://snapcraft.io/microceph
GNU Affero General Public License v3.0

microceph.rbd map #145

Open tomponline opened 1 year ago

tomponline commented 1 year ago

Hi,

Is it expected/understood that microceph.rbd map doesn't work from inside the snap package, even if the kernel modules are loaded externally before it is invoked?
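
For context, a minimal reproduction looks like this (pool/image names are placeholders):

# load the rbd kernel module on the host, outside the snap
sudo modprobe rbd
# mapping still fails from inside the snap's confinement
sudo microceph.rbd map mypool/myimage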

sabaini commented 1 year ago

This is indeed a gap, thanks for bringing it up.

sabaini commented 1 year ago

FTR, here's the failure and the corresponding AppArmor/seccomp denials:

sudo microceph.rbd map bench/testvolume
sh: 1: /sbin/modinfo: Permission denied
sh: 1: /sbin/modprobe: Permission denied
rbd: failed to load rbd kernel module (126)
rbd: failed to add secret 'client.admin' to kernel
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (1) Operation not permitted
[Thu Jun 22 16:36:00 2023] audit: type=1400 audit(1687451760.860:159): apparmor="DENIED" operation="exec" profile="snap.microceph.rbd" name="/usr/bin/kmod" pid=9086 comm="sh" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
[Thu Jun 22 16:36:00 2023] audit: type=1400 audit(1687451760.860:160): apparmor="DENIED" operation="exec" profile="snap.microceph.rbd" name="/usr/bin/kmod" pid=9089 comm="sh" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
[Thu Jun 22 16:36:00 2023] audit: type=1326 audit(1687451760.860:161): auid=1000 uid=0 gid=0 ses=1 subj=snap.microceph.rbd pid=9044 comm="rbd" exe="/snap/microceph/451/bin/rbd" sig=0 arch=c000003e syscall=248 compat=0 ip=0x7fcc8fa78a3d code=0x50000
tomponline commented 1 year ago

Thanks!

sabaini commented 1 year ago

Just to briefly add: even with the rbd kernel module loaded beforehand, the map still fails, because the snap's seccomp profile denies the add_key syscall (syscall 248 in the audit log above), so rbd can't add the client secret to the kernel keyring.
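
To double-check what syscall 248 is, the ausyscall utility from the auditd package can translate the number (assuming an x86_64 host, matching arch=c000003e in the audit log):

# look up the syscall name for the number in the audit record
ausyscall x86_64 248
# prints: add_key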

tomponline commented 1 year ago

Yes, this is exactly what I was seeing after manually loading the kernel modules.

eedgar commented 10 months ago

Is there a workaround for the rbd mapping? I have a corrupt rbd that I need to fsck.

wolsen commented 10 months ago

Is there a workaround for the rbd mapping? I have a corrupt rbd that I need to fsck.

You can potentially use rbd-fuse, but you'll probably need to use packages from the OS instead of MicroCeph (I don't believe it's included in the snap). I haven't used it myself, but I believe you would be able to mount the image files as loopback devices.

https://docs.ceph.com/en/latest/man/8/rbd-fuse/
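
A rough sketch of that approach (untested; mypool, myimage, and the mount point are placeholders, and the conf path assumes MicroCeph's default location):

# rbd-fuse ships in the distro's rbd-fuse package, not the microceph snap
sudo apt install rbd-fuse
sudo mkdir -p /mnt/rbd
# expose every image in the pool as a regular file under the mount point
sudo rbd-fuse -c /var/snap/microceph/current/conf/ceph.conf -p mypool /mnt/rbd
# attach an image file to a loop device and fsck it
DEVICE=$(sudo losetup -f --show /mnt/rbd/myimage)
sudo fsck -y $DEVICE
sudo losetup -d $DEVICE
sudo umount /mnt/rbd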

eedgar commented 8 months ago

This is a quick-and-dirty script with no real error checking, but it may help others:

lxc stop ${container}
rbd export lxd/container${container} - >/backup/out
# attach the exported image to a free loop device and repair it
DEVICE=$(losetup -f)
losetup -f /backup/out
fsck -y $DEVICE
losetup -d $DEVICE
# keep the original image around, then import the repaired copy
rbd mv lxd/container${container} lxd/container_${container}_old
cat /backup/out | rbd import --dest-pool lxd - container${container}
lxc start ${container}
rm /backup/out

There is the last rbd rm lxd/container_${container}_old part to handle if you're happy that it recovered, but I don't have that as part of my script yet.

mlenkeit commented 4 months ago

For testing purposes, I worked around this by running snap install with the --devmode flag, which effectively installs microceph outside the snap sandbox.

Strongly discouraged for production use!
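
For reference, the command would be something like this (a sketch; --devmode is a standard snapd flag, but you may need to remove and reinstall an already-installed snap):

# install the snap without strict confinement -- testing only!
sudo snap install microceph --devmode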

tregubovav-dev commented 2 months ago

I'm facing the same issue when trying to migrate LXD 5.21 with MicroCeph (reef/stable) and CephFS storage to Incus. Incus is distributed as a regular Debian package and runs outside the snap. The Incus migration script fails with a message like this:

Error: Failed to restore "vm-01": Failed to start instance "dns-01": Failed to run: rbd --id admin --cluster ceph --pool lxd map container_infra_dns-01: exit status 1 (rbd: warning: can't get image map information: (13) Permission denied
rbd: failed to add secret 'client.admin' to kernel
rbd: map failed: (1) Operation not permitted)

More details and logs: https://discuss.linuxcontainers.org/t/unable-to-migrate-lxd-5-21-with-microceph-to-incus-6-0/19714

tregubovav-dev commented 2 months ago

There is a workaround offered by Stéphane Graber:

  • install the ceph-common package
  • link the /var/snap/microceph/current/conf/ceph.conf and /var/snap/microceph/current/conf/ceph.client.admin.keyring files into the /etc/ceph/ directory

I validated it and it works in my test environment.

usma0118 commented 1 month ago

+1 works.

There is a workaround offered by Stéphane Graber:

  • install the ceph-common package
  • link the /var/snap/microceph/current/conf/ceph.conf and /var/snap/microceph/current/conf/ceph.client.admin.keyring files into the /etc/ceph/ directory

I validated it and it works in my test environment.
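
For reference, those two steps translate to roughly the following on a Debian/Ubuntu host (a sketch; the map command reuses the pool/image names from the error above):

sudo apt install ceph-common
sudo ln -s /var/snap/microceph/current/conf/ceph.conf /etc/ceph/ceph.conf
sudo ln -s /var/snap/microceph/current/conf/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
# the host's rbd binary (outside the snap) can now load modules and add keys
sudo rbd --id admin --pool lxd map container_infra_dns-01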