nagilum99 opened this issue 7 months ago
If Citrix Hypervisor backports the change to 8.2 CU1, it will bounce into XCP-ng 8.2.1 too at some point.
However, check the instructions in multipath.conf: you can add any configuration you need yourself now, provided you do it in the right file, as described.
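A minimal sketch of that drop-in approach, assuming the file name custom.conf (any name ending in .conf in that directory should do) and that you either reboot, as done further down in this thread, or ask the daemon to re-read its configuration:

# Hypothetical file name; only the directory and the .conf suffix matter.
mkdir -p /etc/multipath/conf.d
vi /etc/multipath/conf.d/custom.conf    # add your device { ... } entries here

# Apply it: reboot the host, or tell the running daemon to reconfigure.
multipathd -k"reconfigure"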
Hey @nagilum99, did you find a solution here? I have an MSA 2060 I'd like to properly connect.
@Petbotson Hi,
Just use the vendor configuration and follow the instructions in multipath.conf. Let us know if you need help.
I updated it myself as a temporary fix. It works as it should.
I'm not doing this every day, so: should I just edit the file /etc/multipath/conf.d/custom.conf and add the following from https://github.com/xapi-project/sm/blob/master/multipath/multipath.conf?
device {
        vendor                "(HP|HPE)"
        product               "MSA [12]0[456]0 .*"
        path_selector         "round-robin 0"
        hardware_handler      "1 alua"
        path_grouping_policy  group_by_prio
        prio                  alua
        failback              immediate
        no_path_retry         18
}
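One note, as an assumption based on the layout of the upstream file linked above: there the device { ... } entries sit inside a devices { ... } section, so a drop-in that mirrors that structure would look roughly like this:

devices {
        device {
                vendor                "(HP|HPE)"
                product               "MSA [12]0[456]0 .*"
                path_selector         "round-robin 0"
                hardware_handler      "1 alua"
                path_grouping_policy  group_by_prio
                prio                  alua
                failback              immediate
                no_path_retry         18
        }
}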
Thanks a lot
If you open the multipath.conf file, it explains how to do it in the header:
# --- WARNING: DO NOT EDIT THIS FILE ---
# The contents of this file may be overwritten at any future time through a
# system update, causing any custom configuration to be lost.
#
# For custom multipath configuration, create a separate .conf file in the
# /etc/multipath/conf.d/ directory.
# --- END OF WARNING ---
Great. So does this mean that if I have multiple devices that need a custom multipath config, I can't simply put the configuration in one file, but need multiple files, e.g. storage1.conf and storage2.conf?
You can put it all in one file; the multipath program will generate "one big" config from all the files. It's more a question of organization on your side. Both solutions are fine.
At some point, for all generic/widely used arrays, the XCP-ng project will try to get the configuration "upstream", so you don't need that.
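To illustrate the one-file option: a single /etc/multipath/conf.d/custom.conf can carry several device entries side by side. The second array below is a purely hypothetical placeholder to show the structure, not a real configuration:

# /etc/multipath/conf.d/custom.conf -- one drop-in can cover several arrays
devices {
        device {
                # the HPE MSA entry from earlier in this thread
                vendor                "(HP|HPE)"
                product               "MSA [12]0[456]0 .*"
                path_selector         "round-robin 0"
                hardware_handler      "1 alua"
                path_grouping_policy  group_by_prio
                prio                  alua
                failback              immediate
                no_path_retry         18
        }
        device {
                # hypothetical second array, shown only to illustrate that
                # several device entries can share one file
                vendor                "SOMEVENDOR"
                product               "SOMEARRAY .*"
                path_grouping_policy  group_by_prio
                failback              immediate
        }
}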
Well, for now I just created one file, custom.conf, and added the following lines:
## MSA 2060 configuration
device {
        vendor                "(HP|HPE)"
        product               "MSA [12]0[456]0 .*"
        path_selector         "round-robin 0"
        hardware_handler      "1 alua"
        path_grouping_policy  group_by_prio
        prio                  alua
        failback              immediate
        no_path_retry         18
}
After rebooting the host, I checked with the command mpathutil status and got the following result:
mpathutil status
show topology
xxxxxxxxxxxxxxxxxxxxx dm-1 HPE ,MSA 2060 iSCSI
size=6.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 10:0:0:1 sdae 65:224 active ready running
| |- 6:0:0:1 sdaa 65:160 active ready running
| |- 12:0:0:1 sdag 66:0 active ready running
| `- 8:0:0:1 sdac 65:192 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 11:0:0:1 sdaf 65:240 active ready running
|- 7:0:0:1 sdab 65:176 active ready running
|- 13:0:0:1 sdah 66:16 active ready running
`- 9:0:0:1 sdad 65:208 active ready running
I think it's working, because the MSA has two controllers, one of which is inactive.
When checking the PBD details, though, I don't see any mention of the multiSession parameter. Shouldn't it be there?
xe pbd-param-list uuid=78ca9439-3dba-b131-6924-875bab6e6af4
uuid ( RO) : 78ca9439-3dba-b131-6924-875bab6e6af4
host ( RO) [DEPRECATED]: f0e03a22-0f66-4ea9-8ead-b005fb3b5a11
host-uuid ( RO): f0e03a22-0f66-4ea9-8ead-b005fb3b5a11
host-name-label ( RO): host
sr-uuid ( RO): 0f25294d-4292-bedf-6f6c-48f9f62d319e
sr-name-label ( RO): MSA-STORAGE
device-config (MRO): multihomelist: 10.1.1.11:3260,10.1.2.14:3260,10.1.1.13:3260,10.1.2.11:3260,10.1.1.14:3260,10.1.2.13:3260,10.1.1.12:3260,10.1.2.12:3260; target: 10.1.1.11; targetIQN: iqn.2015-11.com.hpe:storage.msa2060.baksz2233f62b4b; SCSIid: 3600c0ff000f77196af43226601000000
currently-attached ( RO): true
other-config (MRW): storage_driver_domain: OpaqueRef:d4b309dc-98df-4f4b-9786-852a8b387982; mpath-3600c0ff000f77196af43226601000000: [8, 8]; multipathed: true; iscsi_sessions: 8
I'd name the file after the config it contains - it makes management easier. You can see that it handles the MSA, sees 8 targets, and groups them into active (preferred) and enabled (working but passive/fallback) paths. Looks good to me. We're using FC instead, but at this point that's not a big difference.
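If you want to confirm explicitly that the drop-in was merged into the running configuration, rather than inferring it from the topology, multipathd show config (also used further down in this thread) prints the compiled result; a quick check could look like this, with the grep pattern only as an illustration:

# Print the merged configuration and show the MSA entry, if present.
multipathd show config | grep -F -B 2 -A 10 'MSA [12]0[456]0'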
As stated above: this one (and several others) is already in the XAPI project; you just need to pull it and update your own config. ;-) Adding it for 8.3 via a PR would be good (also as a point under "Features added").
Contributions of default multipath.conf updates go directly to the XAPI project: https://github.com/xapi-project/sm
Thank you, everybody. Looking forward to seeing the latest config from the sm repo in XCP-ng.
I created an issue in xapi-project and noticed that it's probably in the wrong place over there: https://github.com/xapi-project/sm/issues/682
_I've seen that multipath/multipath.conf got a few upgrades. https://github.com/xapi-project/sm/commit/d732cf1ae7e7914d4dde2dd6858bf0718afae44e is the one that would affect us when properly connecting an MSA 2060 FC / SAN.
I'm running XCP-ng 8.2 with current updates, but that config update hasn't made it in yet.
# multipathd show config
doesn't show it yet, so when will it make it? The 2050 is rather old by now and even the 2060 isn't 'fresh'. (I've installed the sm patch with CHV before and it also didn't appear under 8.2 CU1.)_