mdmeier opened this issue 6 years ago
The fundamental problem appears to be with mapping rbd?

```
[root@cloud103-15 ~]# rbd nbd map --device /dev/ndb1 --nbds_max 64 RBD_XenStorage-2dd455e9-0de4-4ed8-af62-64e1a4ace678/VHD-50d28620-24b7-45f9-99f4-7f5ee0bc739e --name client.admin
rbd-nbd: ignoring kernel module parameter options: nbd module already loaded
rbd-nbd: failed to open device: /dev/ndb1
rbd: rbd-nbd failed with error: /usr/bin/rbd-nbd: exit status: 1
```
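In case it's useful, this is roughly how I've been sanity-checking the nbd side (a sketch, not my exact session):

```
# The nbd module is loaded (per the message above), so the kernel
# should expose devices named /dev/nbd0, /dev/nbd1, ... (note: nbd, not ndb)
ls -l /dev/nbd*

# Ask rbd-nbd what it currently has mapped
rbd-nbd list-mapped
```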
Hi Roman,
Long time no chat. I've recently undertaken to upgrade CEPH from kraken to luminous and have come across a strange problem. When migrating a VDI from another SR to CEPH I'm getting the following every second in /var/log/SMlog:
Which is odd, because from what I can tell, `srlock` should be replaced with an actual VDI ID? When I check the locks I see:
Which is always another server. If I'm persistent enough I can `rbd lock remove` them so that this server catches the lock, but then I get:
Any chance you can help with this? I'm currently unable to create new VDIs on my CEPH SR because of it.
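For reference, the lock juggling above looks roughly like this (a sketch; the locker id `client.12345` is made up, and the image spec is the one from the failing map command):

```
# List current locks on the VDI image
rbd lock list RBD_XenStorage-2dd455e9-0de4-4ed8-af62-64e1a4ace678/VHD-50d28620-24b7-45f9-99f4-7f5ee0bc739e

# Remove a lock held by another host:
#   rbd lock remove <pool>/<image> <lock-id> <locker>
# "srlock" is the lock id I keep seeing; "client.12345" stands in for the real locker id
rbd lock remove RBD_XenStorage-2dd455e9-0de4-4ed8-af62-64e1a4ace678/VHD-50d28620-24b7-45f9-99f4-7f5ee0bc739e srlock client.12345
```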