Open hppritcha opened 2 weeks ago
Problematic: the issue here is that specifying an MPI for srun will automatically make Slurm think that the daemons are MPI procs, which has implications for how they are run. What "mpi" option are you thinking of trying?
Bottom line is that the VNI allocation system is broken for indirect launch - been hearing that from other libraries. Only thing I can come up with is to find a non-srun solution, though I'm open to hearing how to get around it.
Just not specifying anything about MPI.
I plan to open a PR to not insert this option into the srun cmd line.
An easy workaround for a user who finds this problematic will be to set SLURM_MPI_TYPE=none in their shell before using mpirun.
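A minimal sketch of that workaround in a POSIX shell (the mpirun line is illustrative and left commented out):

```shell
# Workaround from the discussion above: disable Slurm's default MPI plugin
# in the current shell so the srun-launched daemons are not treated as
# MPI procs.
export SLURM_MPI_TYPE=none
# then launch as usual (illustrative, not executed here):
# mpirun -np 4 ./my_app
echo "SLURM_MPI_TYPE=$SLURM_MPI_TYPE"
```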
Ah, but it is necessary to have that option in non-HPE systems, especially when they set a default MPI type. You could wind up breaking all the non-HPE installations, and the HPE installations that have disabled VNI. Requiring everyone in those situations (which greatly outnumber those with Slingshot) to set a fix seems backwards to me. Perhaps finding a more generalized solution might be best?
Also, remember that Slurm now injects its own cmd line options, so we need to figure out a solution that accounts for that as well.
Looking back, it appears we may have had to add this option to avoid having the daemon automatically bound, which then forced the procs it started to share that binding. Other options could probably also be used for that purpose. However, there may be additional reasons why we added it, so some further investigation may be needed to be sure we don't cause problems.
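One conceivable alternative, sketched here purely as an assumption and not verified against the PLM's actual requirements, would be to suppress binding explicitly rather than via the MPI plugin selection:

```shell
# Hypothetical alternative: avoid binding the daemon without implying the
# daemons are MPI procs. --cpu-bind=none is a standard srun option; whether
# it fully substitutes for --mpi=none in the PLM is untested. Node counts
# and daemon arguments are placeholders.
srun --cpu-bind=none --nodes=2 --ntasks-per-node=1 prted <daemon args>
```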
The real issue isn't caused by the VNI itself - that's just an integer that is easily generated. The problem is the requirement that the VNI be "loaded" into CXI at a privilege level the PRRTE daemon doesn't run at, so the daemon is blocked from doing it.
One solution is to create a setuid script that takes only one argument (the VNI) and executes the required operation at the CXI user's level. You might check and see if anyone has an issue with that, and what can be done to minimize any concerns. Ultimately, that's probably the correct solution - if one can make it acceptable.
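A rough sketch of what such a helper might look like, written as a plain shell function for illustration only. A real implementation would likely need to be a compiled setuid binary, since Linux ignores the setuid bit on scripts, and `cxi_load_vni` below is a hypothetical placeholder, not a real CXI command:

```shell
# Sketch only: accept exactly one numeric argument (the VNI), validate it,
# then hand it to the privileged operation. cxi_load_vni is a placeholder
# for whatever privileged CXI load the system actually requires.
load_vni() {
    [ "$#" -eq 1 ] || { echo "usage: load_vni <vni>" >&2; return 2; }
    case "$1" in
        ''|*[!0-9]*) echo "error: VNI must be a non-negative integer" >&2; return 2 ;;
    esac
    # placeholder for the real privileged CXI load:
    # cxi_load_vni "$1"
    echo "would load VNI $1"
}

load_vni 1023
```

Restricting the helper to a single validated integer argument is what keeps the privilege surface small, which is presumably what would make a setuid approach acceptable to site security reviews.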
@hppritcha what version of SLURM are you using on this machine that experiences the issue?
This came up on the PMIx call today, and I'm a bit lost on how --mpi=none in the Slurm PLM might be improving anything? The switch plugin in Slurm will kick in regardless, and should be setting up the VNIs.
The Slurm PLM component sets --mpi=none as part of the srun command used to launch the prted daemons. On HPE Slingshot 11 networks where VNI credentials are enforced, this effectively results in a failure to launch for multi-node jobs.
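For orientation, the launch command built by the PLM has roughly this shape (node counts and daemon arguments below are placeholders, not the PLM's exact output):

```shell
# Illustrative shape of the PLM-built launch command; the --mpi=none option
# is the part at issue on Slingshot 11 systems with VNI enforcement.
srun --mpi=none --nodes=2 --ntasks-per-node=1 prted <daemon args>
```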
Turning on FI_LOG_LEVEL=debug shows a characteristic signature for this:
This addition to the srun command line options for prted launch needs to be suppressed on systems using HPE Slingshot.