I talked with @mfoliveira about this issue, and we devised a plan which may or may not be the right way to address this issue. Please let me know if you see any problems with the approach described below.
The idea is to implement a variant of the `-u` option of `multipath` which would be responsible for checking whether a certain path is valid, but only communicate with `multipathd` if it is. This way, we could invoke `multipath` from the udev rule using this option, which would actually make socket activation work as expected. Based on our discussions, it doesn't seem that the daemon is actually needed to determine whether a certain path is valid...? Or maybe we're missing something.
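To make the idea concrete, here is a rough sketch of how such a rule could look. This is not the actual upstream rule, and the `--no-daemon-query` flag is purely hypothetical, a name made up for illustration:

```
# Hypothetical sketch: "multipath -u" would first decide on its own whether %k
# looks like a multipath path, and only then talk to multipathd's socket, so
# that multipathd.socket is not activated for ordinary local disks.
ACTION=="add|change", SUBSYSTEM=="block", KERNEL!="dm-*", \
    PROGRAM=="/sbin/multipath -u --no-daemon-query %k", \
    ENV{DM_MULTIPATH_DEVICE_PATH}="1", ENV{ID_FS_TYPE}="mpath_member"
```

Whether such a local check can be reliable is exactly the question discussed below.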
Feedback is more than welcome.
> The idea is to implement a variant of the `-u` option of `multipath` which would be responsible for checking whether a certain path is valid, but only communicate with `multipathd` if it is.

I am afraid this isn't going to work. We ask the daemon for a reason. Determining whether any given device should be part of a multipath map is, unfortunately, a very hard problem, and the maintainers of multipath-tools have put a lot of effort into making it work as reliably as possible in various scenarios. multipathd is a stateful daemon that knows not only the current uevent and the current list of devices, but also has some record of previous events and internal configuration. By asking the daemon, we avoid having a different view of the system between multipathd and the `multipath -u` instance running in the udev worker context.
That said, recent multipath-tools versions assume that on systems with multipath hardware, multipathd is started early during boot, anyway. Socket activation arguably doesn't make much sense in a setup like that. In a way, socket activation has never made much sense for multipathd. It's a daemon that must be running in order to do what it's supposed to do. Socket activation for multipathd is mostly intended for "playgrounds", where users boot without multipath and add SAN devices at a later time. That's usually just a temporary state during system setup. As soon as the SAN devices are active, users will most probably want them to be ready after booting, which means enabling `multipathd.service`.
Thinking about it, I am open to removing socket activation for multipathd altogether. Its usefulness is very limited, IMO. @bmarzins may have a different opinion. If we do away with socket activation, multipathd would be running if and only if `multipathd.service` was enabled, which is what many users would expect. The only drawback is that users on "playgrounds" would now need to start the service manually.
OTOH, if a system is known not to use dm-multipath, why would it have multipath-tools installed in the first place? The easiest way to avoid autostarting of multipathd would be to uninstall it. Alternatively, the kernel command line parameter `multipath=off` can be used, or the user can mask `multipathd.service`.
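For reference, on a systemd-based system this boils down to something like the following (a sketch, assuming the stock unit names used throughout this thread):

```console
# keep multipathd from being started on a host that does not use dm-multipath
systemctl disable --now multipathd.service multipathd.socket
# or, more forcefully, prevent any manual or socket-based activation:
systemctl mask multipathd.service multipathd.socket
# alternatively, boot with "multipath=off" on the kernel command line
```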
> OTOH, if a system is known not to use dm-multipath, why would it have multipath-tools installed in the first place?

Reading through the Debian issue, I realize that this is about the state after installation, and the default image. We have the same problem in (open)SUSE. We want to provide multipath support for users that need it, and thus include multipath-tools in the base image. In a traditional installation, it might be possible to just install multipath-tools if multipath devices were encountered during installation. But if ready-built images are deployed, that option might not be available.
I'm afraid the best solution I can offer here (upstream) is to disable socket activation for multipath. It feels kind of backward, but as argued above, it makes sense for a service like this that is expected to be continuously running when it's needed.
I don't feel very strongly about the existence of multipathd.socket. Removing the socket, along with removing the (admittedly imperfect) code that checked whether multipathd was supposed to be started by systemd, does feel like removing some guardrails that kept things from going badly in some unlikely corner cases. This makes me a little nervous. But in general, multipathd needs to be always running for things to work correctly. These guardrails only protected people in a limited set of cases, so they probably aren't all that helpful.
Just as an aside, Red Hat distributions have always solved this by not starting multipathd and automatically assuming all devices are blacklisted if there is no /etc/multipath.conf. I understand why my patch to do this has not been accepted upstream: it's a disruptive change for distributions to make. But it allows us to ship the tools everywhere without worrying about multipathd being started unnecessarily. The patch is here in case anyone is interested.
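For context, under that scheme a user who does want multipathing opts in simply by creating the file. A minimal `/etc/multipath.conf` might look like this (illustrative only; `mpathconf --enable` on Red Hat systems generates something similar):

```
defaults {
    user_friendly_names yes
    find_multipaths yes
}
```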
Thank you very much for the thoughtful replies, @mwilck and @bmarzins.
@mwilck, as you've correctly noticed, this whole conversation has to do with the fact that multipath-tools is part of Ubuntu's base image, but we would like to prevent the service from starting if the user doesn't have any multipath devices, in order to save a little bit of resources. This request came from users who are running Ubuntu in environments where it doesn't make sense to have multipath devices, and where having a little bit more RAM available is always welcome, like a Raspberry Pi.
Thanks also for letting me know that the idea I described won't work. It did sound like a long shot, but I thought it was worth trying.
I've now been considering another option. Inspired by libvirt, which has a timeout parameter that can be used to stop the service if nothing happens after a certain period of time, I thought about implementing a similar approach in multipath-tools. Not a timeout per se, but maybe something like "start the daemon, check if there are any valid multipath devices, and if there aren't, just stop the daemon". Would that be a reasonable approach to the problem? That could be implemented as a new command line option that would be used together with `-u` in the udev rule.
@bmarzins yeah, I'd noticed that Red Hat uses the approach you've mentioned. I even considered the same for Debian/Ubuntu, but I'm not comfortable implementing such a change in behaviour there.
@sergiodj, what's wrong with simply removing the socket? Too simple? :grin:
I have pondered the "timeout" approach, too, but IMO it fits libvirt better than multipathd, and I don't see the benefit compared to removing socket activation and switching multipathd on and off simply using `systemctl enable` and `systemctl disable`.
Wrt `multipath.conf`, that wouldn't work for SUSE because we haven't been shipping a conf file for years; multipathd runs with an empty configuration by default.
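For what it's worth, the configuration the running daemon actually uses, built-in defaults included, can be inspected at any time:

```console
# dumps the effective configuration, even if no /etc/multipath.conf exists
multipathd show config | less
```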
> @sergiodj, what's wrong with simply removing the socket? Too simple? 😁

Not at all! :-D Based on what you described, I am also a +1 to remove the socket. It causes confusion among users and distribution folks, as represented by this bug. I see my suggestion more like an orthogonal solution to the problem of having `multipathd.service` active when there are no multipath devices in the system, but taking into account the fact that socket activation doesn't apply.
> I have pondered the "timeout" approach, too, but IMO it fits libvirt better than multipathd, and I don't see the benefit compared to removing socket activation and switching multipathd on and off simply using `systemctl enable` and `systemctl disable`.

There's a bit of a problem with using this approach on Debian/Ubuntu. multipath-tools is part of the base image in Ubuntu, and any service installed in the system needs to be started and ready to be used by default (which means that it will be enabled and started). `systemctl disable` works fine, but this means that the user needs to manually do it. Many users don't realize that they have the service installed and consuming resources.
So you can't enable/disable multipathd based on the hardware that was detected during installation?
While I understand that having services running that you don't need is annoying, multipathd doesn't use a lot of resources when it's idle, and those users who can't stand it can easily just disable the service. IMHO that should be acceptable. OTOH, you can consider leaving the service disabled by default and telling people with multipath hardware to enable it either during or after installation[^suse]. The fraction of users that actually need it is arguably rather low, and people who do need it are usually aware of multipathd, while those who don't might be totally ignorant about it.
That said, if we follow the "disable socket activation" approach, do we want to completely eliminate the socket unit and multipathd's support for socket activation, or do we just want to leave the socket unit disabled by default? The 2nd option would allow users who do want socket activation to re-enable it easily. @bmarzins, your opinion?
[^suse]: On SUSE distros, the installer runs `multipath -d` to see if any multipath hardware is present, and if yes, asks the user whether to enable multipath. If the user confirms, multipathd will run both during installation and afterwards.
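For reference, such a probe can be approximated with existing tools. A sketch, assuming `multipath -d` (dry-run mode) prints nothing when no maps would be created:

```console
# Dry run: print the multipath maps that would be created, without creating them
if multipath -d 2>/dev/null | grep -q .; then
    echo "multipath hardware detected; consider enabling multipathd.service"
fi
```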
It is possible to query for multipath devices during installation and decide whether the service should be active, but that poses two problems for us. First, we try to minimize the number of questions asked during package installation. Second, multipath-tools is already part of the base image in Ubuntu, so in this case there would be no prompt. Perhaps the installer could be extended to do this query, but I don't want to get into that rabbit hole :-).
I agree with your description of the problem: this is likely just a corner case and the vast majority of users don't really care about the ~20MB of RAM that multipathd consumes when it's doing nothing. I guess my intention here is to address this problem in the least intrusive way possible (by having multipathd accept an option that tells it to shut down if there's no multipath device found), but I'm fine if the final upstream opinion here is that this is not worth the trouble. We'll probably just keep things as is in that case.
> So you can't enable/disable multipathd based on the hardware that was detected during installation?

One goal is that if you happen to install multipath hardware after system installation, you don't have to install new packages to get it working. So the ideal scenario would be that multipath is installed, but only active if devices are detected and used, and this can happen at any time after the system was first installed.
> multipathd doesn't use a lot of resources when it's idle

Correct, but there is a density story here as well. I don't know if multipath is useful in a VM, but imagine it running by default on hundreds of VMs on a server, and it adds up.
> and those users who can't stand it can easily just disable the service

They can, but it's an extra step. I'm experienced with Linux, but even I don't know offhand which of the desktop-related services running by default are "safe" to disable. I imagine the same can be true for server admins, although they should be more careful about what they are running, of course. If we could make this automatic, and safe, it would be a nicer story.
> I agree with your description of the problem: this is likely just a corner case and the vast majority of users don't really care about the ~20MB of RAM that multipathd consumes when it's doing nothing.

Well, 20MB is quite a bit. I, too, have grown up with computers having 64kB RAM and less :-)
I suppose this is because multipathd locks its memory. It needs to do that because it needs to continue running even if no disk IO is temporarily possible. That's something we need to look at. I don't think we can avoid the locking, but maybe we can reduce multipathd's memory footprint. We almost certainly can, I know for sure that multipathd's memory usage is less than optimal. The question is whether it would be worth the effort. A lot of this memory might actually be used by shared libraries, and won't go away no matter how much we optimize.
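For anyone who wants to check on their own system, the resident and locked memory of the running daemon can be inspected like this (a sketch, assuming multipathd runs as the usual systemd service):

```console
pid=$(systemctl show -p MainPID --value multipathd.service)
# VmRSS is the resident set, VmLck the memory pinned by the daemon's memory locking
grep -E 'VmRSS|VmLck' /proc/"$pid"/status
```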
> I guess my intention here is to address this problem in the least intrusive way possible (by having multipathd accept an option that tells it to shut down if there's no multipath device found), but I'm fine if the final upstream opinion here is that this is not worth the trouble. We'll probably just keep things as is in that case.

The crux is to determine how long to wait until multipathd makes this decision. Booting can take a long time. Depending on system configuration, even if there is multipath hardware, it might not be seen as such by multipathd before the admin changes the configuration (adds WWIDs, or changes the `find_multipaths` option). We certainly don't want to introduce a new type of boot failure here, caused by multipathd exiting prematurely.
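For readers unfamiliar with that option, here is a rough sketch of the relevant `multipath.conf` knob; see `multipath.conf(5)` for the authoritative description, the summaries in the comments are simplified:

```
defaults {
    # find_multipaths controls which paths multipathd claims (simplified):
    #   "strict" - only WWIDs already recorded in the wwids file
    #   "yes"    - known WWIDs, or devices for which more than one path is seen
    #   "smart"  - additionally claim unknown single paths tentatively,
    #              releasing them if no second path shows up in time
    find_multipaths smart
}
```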
> > multipathd doesn't use a lot of resources when it's idle
>
> Correct, but there is a density story here as well. I don't know if multipath is useful in a VM, but imagine it running by default on hundreds of VMs on a server, and it adds up.

You can configure multipath in VMs. It usually makes no sense for volumes that are present on the host, but it does e.g. with iSCSI.
> > and those users who can't stand it can easily just disable the service
>
> They can, but it's an extra step. I'm experienced with Linux, but even I don't know offhand which of the desktop-related services running by default are "safe" to disable. I imagine the same can be true for server admins, although they should be more careful about what they are running, of course. If we could make this automatic, and safe, it would be a nicer story.

I understand. I have no problem with communicating clearly that you don't need to have multipathd running, enabled, or even installed unless you are using dm-multipath (which I am assuming admins are aware of if they do). Perhaps that would reduce the fear of disabling it.
Wrt the "extra step", I assume that data center admins use someting like ansible, where disabling this service would be just one more directive in a probably long list of steps to take. But I can't speak for everyone, of course.
Here is an idea that might work: instead of a timeout, we could create a new CLI command for multipathd, "shutdown-if-idle" or the like. When this command is received, multipathd would check whether it has found any maps, and exit if it hasn't. The "shutdown-if-idle" command could be run from a system service that is started late in the boot sequence, shortly before or even after reaching default.target. This command shouldn't be used in the initramfs.
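A minimal sketch of what that trigger could look like on the systemd side. Note that `shutdown-if-idle` is the proposed, not yet existing, multipathd command, and the unit name is just a placeholder:

```ini
# stop-multipathd-if-idle.service (hypothetical sketch)
[Unit]
Description=Stop multipathd if no multipath maps were set up during boot
After=multipathd.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# "shutdown-if-idle" is the proposed new multipathd CLI command
ExecStart=/sbin/multipathd shutdown-if-idle

[Install]
WantedBy=multi-user.target
```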
I'm open for receiving patches ... :smile:
Your comment made me realize that perhaps I wasn't very clear when I proposed the solution. I only mentioned libvirt's timeout approach as an example, but what I really wanted to implement for multipathd was "check if there are any valid multipaths, and exit if there are none".

I think such an option could be tightly coupled with the `-u` option, and that we could use it in the udev rule, because that's where multipathd is being started during boot time.
> what I really wanted to implement for multipathd was "check if there are any valid multipaths, and exit if there are none".

Sure, I understood that. The problem is, how long do you wait for such devices to appear? There's no guarantee whatsoever that they'll be seen during early boot.
Therefore my suggestion to run "`shutdown-if-idle`" around the time `default.target` is reached, triggered by systemd. If at that point in time no multipath maps have been set up, it's at least not unlikely that there are none. But multipathd needs this external trigger. I can't think of any other means by which it would be able to determine that the system is "fully up". You definitely can't "check if there are any valid multipaths" before `network-online.target` is reached.
> I think such an option could be tightly coupled with the `-u` option, and that we could use it in the udev rule, because that's where multipathd is being started during boot time.

No, once more, that won't work. At least I don't see how it could. Udev rules are executed during early boot. We have no clue at that point in time whether or not any multipath maps are going to be discovered.
Perhaps a combined/two-stage approach, with two points to possibly stop multipathd, i.e., keep it running only if multipath maps have been set up in either stage?

1) early boot: multipathd is started for early boot and may stop at `default.target` (via external call to `shutdown-if-idle`)
2) post boot: multipathd may start (via socket activation in udev rules) and may stop itself (via internal call to `shutdown-if-idle`)
Thanks!
What do you need 2.) for?
> What do you need 2.) for?

In case devices are added after boot that may or may not be individual paths to a multipath device.

For example, users can attach volumes to their OpenStack instances (or hypervisors too?) after boot. I seem to recall that usage may be either single-path per volume or multi-path per volume. So one can't really know in advance, but only when (and if) additional paths are added.

The idea of 2.) being: if only single-path devices are added after boot (and no multipath maps are set up), then multipathd can shut down and remain stopped, not consuming resources.
Just to be clear, the idea is that `multipath -u` wouldn't check if multipathd is running until it knows that the path would be claimed by multipath if multipathd was running, and then relies on socket activation being set up to start multipathd when it checked if it was running, correct? It would work, but then we're doing a bunch of useless work on every add or change event for the vast majority of people, who aren't using multipath. It also does kind of annoying things to libmpathvalid. The current calls in that library would now return results provisional on multipathd being active.
If we did something like this, I'd prefer it to be configurable whether we check early or late for multipathd running, but obviously you can't use multipath.conf for that, since the goal is to run without one, unless you want distributions that use this to always install a multipath.conf file with this option enabled. Otherwise it could be a multipath command option. Then distributions would just need to modify the multipath udev rules file to call multipath -u with an additional option.
> I seem to recall that usage may be either single-path per volume or multi-path per volume. So one can't really know in advance, but only when (and if) additional paths are added.

Right. This is the definition of our `find_multipaths smart` algorithm. It needs multipathd to be running. People who intend to add hardware in this way should make sure multipathd is running.
Trying to automate this is over-engineered.

[…] `default.target` is reached, and want to add devices later on. Typically, this will happen only once in the lifetime of a server. After the initial configuration, the devices will be made available during boot (goto 2.).

If the added devices aren't multipath devices, these users will be fine, too. […] `stop-multipathd-if-idle.service` in the first place (this is what I'd recommend).

We are now discussing automation of 4. to avoid 5. Sorry, this looks like a prime example of a corner case to me.
> Just to be clear, the idea is that `multipath -u` wouldn't check if multipathd is running until it knows that the path would be claimed by multipath if multipathd was running, and then relies on socket activation being set up to start multipathd when it checked if it was running, correct?

Am I the only one to whom this sounds convoluted?
Repeat, IMO it is impossible that multipath -u would reliably know whether "the path would be claimed by multipath if multipathd was running" without querying multipathd for this information. We have worked hard to establish this logic. Keywords are "find_multipaths smart" and "is_failed_wwid", and possibly more that doesn't come to my mind just now.
All this said, I am willing to review patches, but I am unlikely to ack over-complicated solutions to a corner case problem. I suggest we start with disabling the socket, and move on to the "stop-if-idle". Once that works, we can start fixing the corner cases, or see if we have better things to work on.
> Repeat, IMO it is impossible that multipath -u would reliably know whether "the path would be claimed by multipath if multipathd was running" without querying multipathd for this information.

Maybe we misunderstood, and @mfoliveira meant to re-enable socket activation somehow after the first instance of multipathd has quit. I have no clue how to implement that race-free in systemd, but perhaps someone will come up with a scheme that achieves this.
After re-enabling the socket, every `multipath -u` invocation in the rules would potentially start multipathd. This would happen on every "add" or "change" uevent for most block devices, and there can be LOTS of those. If no multipath devices were detected in this situation, multipathd would sit idle for a while and then kill itself via some "internal invocation of `shutdown-if-idle`" (I suppose that would be controlled by a timeout), to be started again when the next uevent arrives.
multipathd startup is resource-intensive, because it needs to read all block devices and all dm devices in the system. Therefore this attempt to save resources might actually have the opposite effect.
So, that alternative also doesn't look promising to me.
Hi,

I'm working on implementing real socket activation for `multipathd.service` on Debian/Ubuntu, and I'm having problems with how the current `60-multipath.rules` causes the socket to trigger the service start even when there's no multipath device in the system.

The problem seems to happen because the udev rules will invoke `multipath`, like in:

https://github.com/opensvc/multipath-tools/blob/f3004b45e7f8266a3f00b146d4742821d04b7940/multipath/multipath.rules.in#L32-L35

This will cause `multipathd.socket` to be notified, which finally makes the service start. As I said, it doesn't matter if there is a multipath device in the system: the service will always start.

I'm thinking about possible ways to overcome this limitation, but it would be extremely helpful to hear your opinions as well.
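For illustration, the behavior can be reproduced roughly like this on an affected system (a sketch, assuming the stock unit names, no multipath devices present, and only the socket unit enabled):

```console
systemctl stop multipathd.service        # daemon not running
systemctl status multipathd.socket       # socket enabled and listening
udevadm trigger --action=change --subsystem-match=block
systemctl status multipathd.service      # now running, started via the socket
```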
Thanks in advance.