open-iscsi / tcmu-runner

A daemon that handles the userspace side of the LIO TCM-User backstore.
Apache License 2.0

Fix DBus for multiple independent handlers #15

Open agrover opened 8 years ago

agrover commented 8 years ago

Now that we're going down the libtcmu path for supporting handlers other than tcmu-runner, if we're going to present a coherent picture of the available handlers to the user then they need to cooperatively register in one place. We can use Telepathy's "mission-control-5" as a model for how to accomplish this.

os12 commented 8 years ago

Hey Andy, I think the embedded cases (i.e. large applications linking libtcmu.a) are orthogonal to tcmu-runner's system/DBus registration and interfaces. That is, a system would run either tcmu-runner with its plugins, or a larger application with libtcmu linked in.

Here is how I see the embedded case:

agrover commented 8 years ago

Yes, it's not relevant to the embedded case. It's relevant to the distro use case, and I think we can accommodate both.

famz commented 8 years ago

Does it make sense to handle the dbus communications in libtcmu?

agrover commented 8 years ago

I think so; it's just a matter of figuring out how to do it, given that multiple processes could be using libtcmu, and only one entity can back a DBus service. I looked into how Telepathy does this, but got lost in all the code.

famz commented 8 years ago

I think we can extend tcmu-runner's org.kernel.TCMUService1 interface and let it be the middleman coordinating between multiple handlers and targetcli.

Currently it only implements a CheckConfig method to verify the cfgstring, and provides object introspection (for targetcli) to enumerate the available subtypes. If we extend this service to cover libtcmu users as well, the targetcli user interface is extended automatically. Specifically, we can add a "RegisterHandler" method (family) alongside CheckConfig, which accepts a subtype string from libtcmu.
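To make the idea concrete, here's a sketch of what the extended introspection data could look like. Only CheckConfig exists today; the RegisterHandler signature (subtype and description in, success flag and reason out) is my own guess at a shape, not anything agreed upon:

```xml
<!-- Hypothetical extension of tcmu-runner's org.kernel.TCMUService1
     interface; RegisterHandler is the proposed addition. -->
<node>
  <interface name="org.kernel.TCMUService1">
    <!-- Existing method: targetcli calls this to validate a cfgstring. -->
    <method name="CheckConfig">
      <arg name="cfgstring" type="s" direction="in"/>
      <arg name="success" type="b" direction="out"/>
      <arg name="reason" type="s" direction="out"/>
    </method>
    <!-- Proposed method: a libtcmu process registers its handler subtype. -->
    <method name="RegisterHandler">
      <arg name="subtype" type="s" direction="in"/>
      <arg name="description" type="s" direction="in"/>
      <arg name="success" type="b" direction="out"/>
      <arg name="reason" type="s" direction="out"/>
    </method>
  </interface>
</node>
```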

One more related feature I'd like to discuss is file descriptor passing.

In a typical virtualization setup, QEMU doesn't open files by itself; instead it gets FDs passed from libvirt, which opens image files on its behalf, via unix socket control messages. This is done because QEMU runs with limited privileges for security reasons. With TCMU, the QEMU process would need to open /sys/class/uio and /sys/device/tcm_user, which is not desirable. A solution is to let tcmu-runner, which receives the DBus messages, do the opening and pass the FD back to libtcmu through a unix socket. I think this is doable.

In the case of qemu-tcmu (a standalone TCMU handler utility) we may not need this: just as qemu-img has access to the images when it is invoked by libvirt, we could presumably run qemu-tcmu with permission to access uio. But in my first take on TCMU support in QEMU, I would like to introduce an embedded handler, like the NBD server in qemu-system-* processes, because: 1) it is easier to do, and 2) it is easier to justify: it allows QMP, which is the control path, to manage the targets, and it enables virtualization use cases where virtual disk data is accessed on the host through LIO fabrics with SCSI commands; the current NBD protocol is poorer by comparison (see also the recent discussions on qemu-devel@nongnu.org about extending the NBD command set to allow querying block allocation states).

What do you think, Andy?

Fam

agrover commented 8 years ago

I think extending DBus to allow libtcmu users to RegisterHandler with tcmu-runner is good. Some design issues I have: deregistration, i.e. what do we do when another process supporting handlers goes away? I don't have an answer (and was hoping Telepathy's implementation did). Also, how would tcmu-runner handle proxying check_config calls, that sort of thing?

Regarding FD passing, I have no immediate objections, but maybe we can open a separate issue to discuss further, or when you have a PR?

famz commented 8 years ago

My idea is like this:

At the time of RegisterHandler, tcmu-runner and $FOO_HANDLER open an out-of-band unix socket. This socket can then be used by tcmu-runner to do check_config calls when targetcli issues a DBus CheckConfig call: tcmu-runner sends a check_config query to $FOO_HANDLER through the unix socket and waits for the reply. (We need a mini protocol here, which could be built on, for example, json-rpc.)

With that, deregistration would be implicit: it happens when the unix socket is closed (G_IO_HUP).
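A sketch of that mini protocol, assuming newline-delimited JSON messages (Python for brevity; the message shape is my own invention, not an agreed format, and a thread plus socketpair stand in for the two processes; an empty read plays the role of G_IO_HUP):

```python
import json
import socket
import threading

def send_request(sock, method, params):
    """tcmu-runner side: send one query over the out-of-band socket, block for reply."""
    sock.sendall(json.dumps({"method": method, "params": params}).encode() + b"\n")
    line = sock.makefile("rb").readline()
    if not line:                 # empty read: peer closed the socket, i.e.
        return None              # implicit deregistration (G_IO_HUP)
    return json.loads(line)

def serve_one(sock, check_config):
    """$FOO_HANDLER side: answer a single check_config query."""
    req = json.loads(sock.makefile("rb").readline())
    ok, reason = check_config(req["params"]["cfgstring"])
    sock.sendall(json.dumps({"result": ok, "reason": reason}).encode() + b"\n")

# Demo over a socketpair standing in for the registered unix socket.
runner, handler = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
t = threading.Thread(
    target=serve_one,
    args=(handler, lambda cfg: (bool(cfg), "ok" if cfg else "empty cfgstring")))
t.start()
reply = send_request(runner, "check_config", {"cfgstring": "file//tmp/test.img"})
t.join()
print(reply)                     # → {'result': True, 'reason': 'ok'}
```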

If this sounds okay, I can work on patches implementing it, and probably extend it to support FD passing, in the coming PR.

agrover commented 8 years ago

I feel like DBus should also be used here, instead of our own RPC mechanism. But if you'd like to focus on getting your stuff working and want to implement what you describe in the short term, that's fine, and I can worry about converting it to DBus later (I do believe DBus can pass FDs, fwiw). This is an internal interface, so that future change should not cause disruption.

famz commented 8 years ago

It is better if we can stick to DBus, but I have questions I cannot answer. I am not a DBus expert, and reading through the documentation I couldn't find a way to do a callback from the DBus service (tcmu-runner) to its clients (the handler processes).

How would you implement check_config with DBus? And how would tcmu-runner know when the handler is killed?

I'd like to do some work here for the long term; after all, QEMU cannot rely on the internals of a daemon.

agrover commented 8 years ago

I'm not a dbus expert either, but I'll look more into doing what we want to do today, and ask the experts on the dbus mailing list as well.

agrover commented 8 years ago

Opened #40 to discuss the FD thing in a separate issue from the DBus stuff.

agrover commented 8 years ago

I think, from looking at this, that if we have each user of libtcmu provide a distinct well-known bus name implementing the TCMUService1 API (or something similar), then tcmu-runner can get signals when bus names appear or disappear, reply to the current API with the complete list of handlers, and proxy API calls like CheckConfig through to the specific handler instance.
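The registry logic that design implies can be sketched like this (Python for brevity; plain method calls stand in for the DBus signals, since the real version would watch bus-name ownership, e.g. with GLib's g_bus_watch_name(), and the bus names below are made up for illustration):

```python
class HandlerRegistry:
    """tcmu-runner's view of external libtcmu handlers, keyed by bus name."""

    def __init__(self):
        self.handlers = {}                      # bus name -> handler proxy

    def name_appeared(self, bus_name, proxy):
        """Called when a handler's well-known bus name shows up."""
        self.handlers[bus_name] = proxy

    def name_vanished(self, bus_name):
        """Called when the name disappears: implicit deregistration."""
        self.handlers.pop(bus_name, None)

    def subtypes(self):
        """Complete handler list to report through the current API."""
        return sorted(p.subtype for p in self.handlers.values())

    def check_config(self, subtype, cfgstring):
        """Proxy CheckConfig through to the handler owning this subtype."""
        for proxy in self.handlers.values():
            if proxy.subtype == subtype:
                return proxy.check_config(cfgstring)
        return (False, "unknown subtype: " + subtype)

class FakeHandler:
    """Stands in for a DBus proxy to an external libtcmu process."""
    def __init__(self, subtype):
        self.subtype = subtype
    def check_config(self, cfgstring):
        return (bool(cfgstring), "ok" if cfgstring else "empty cfgstring")

reg = HandlerRegistry()
reg.name_appeared("org.kernel.TCMUService1.qemu", FakeHandler("qemu"))
reg.name_appeared("org.kernel.TCMUService1.file", FakeHandler("file"))
print(reg.subtypes())                           # → ['file', 'qemu']
print(reg.check_config("qemu", "qemu/img.qcow2"))   # → (True, 'ok')
reg.name_vanished("org.kernel.TCMUService1.qemu")   # process went away
print(reg.subtypes())                           # → ['file']
```

This keeps deregistration implicit, as in the unix-socket variant: losing the bus name is the disconnect event.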