@mwilck I can see you among the recent contributors, so I'm notifying you directly. SLES is affected; could you take a look?
@mtkaczyk, please open a SUSE bug.
> The tool policy is aggressive because by default it claims every NVMe device.
It's true, we use this policy under SUSE. Other distributions are probably not affected.
The solution is indeed to use blacklisting, and to set the `find_multipaths` option to `yes` or `smart`.
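For reference, a minimal `/etc/multipath.conf` sketch showing both approaches; the blacklist regex is only an illustrative placeholder for the NVMe member disks and has to be adapted to the actual system:

```
defaults {
    # Only set up multipath maps for devices that really have more
    # than one path (or are already recorded in the wwids file).
    find_multipaths "smart"
}

blacklist {
    # Alternatively/additionally, exclude the RAID member disks
    # explicitly. This regex matches all NVMe namespaces and is only
    # an example; narrow it down or use wwid entries instead.
    devnode "^nvme[0-9]+n[0-9]+"
}
```

Since the multipath configuration is normally also included in the initrd, rebuilding it afterwards (e.g. `dracut -f`) is probably needed for the change to take effect in early boot.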
In your case I'd recommend activating multipath after installation, which should "just work".
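A rough sketch of that route, assuming the system was installed without multipath and the arrays already assemble cleanly at boot:

```
# Enable and start multipathd only on the installed system; the MD/VROC
# member disks should already be claimed by MD at this point.
systemctl enable --now multipathd.service

# Verify that no multipath maps were created on top of the RAID members.
multipath -ll
```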
This is related to SUSE only, not an upstream issue.
Hello, I'm seeing an issue between multipathd and MD RAID management. I have a SLES 15 SP5 system configured with VROC RAID0 and native RAID0:
I determined that enabling `multipathd` causes my arrays not to be started after reboot:

It seems to be caused by a race between mdadm and this daemon on startup; the daemon doesn't respect the metadata on the drives.
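For illustration, this is roughly how the state after such a failed boot can be inspected (device names are examples only):

```
# Were the MD arrays assembled at all?
cat /proc/mdstat

# Did multipathd claim the NVMe member disks instead? Disks that show
# up here as multipath maps are no longer usable by mdadm directly.
multipath -ll

# The RAID metadata that multipathd ignores is still on the members:
mdadm --examine /dev/nvme0n1
```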
Obviously, I'm able to fix the issue by blacklisting the devices in the multipath config. Another workaround is to force the MD modules into the initrd image, so the RAID is started earlier. The main problem is that if multipathd is enabled during the installation, then after rebooting into the new OS the RAIDs are broken. The tool policy is aggressive because by default it claims every NVMe device.
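A sketch of the initrd workaround on SLES (which uses dracut); the module list is an example and should match the RAID levels actually in use:

```
# /etc/dracut.conf.d/10-mdraid.conf
# Pull the MD modules and mdadm configuration into the initrd so the
# arrays are assembled early, before multipathd can claim the members.
add_drivers+=" md_mod raid0 "
mdadmconf="yes"
```

followed by `dracut -f` to rebuild the initrd.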
Can we do something to change this behavior? Here are some ideas:
Thanks, Mariusz