Azure / cyclecloud-slurm

Azure CycleCloud project to enable users to create, configure, and use Slurm HPC clusters.
MIT License

additional support for RHEL clones (almalinux, rockylinux) #76

Closed tbugfinder closed 5 months ago

tbugfinder commented 2 years ago

Hi,

Now that CentOS is no longer a bug-for-bug-compatible rebuild of RHEL, I'd consider AlmaLinux and Rocky Linux the major successor platforms. Currently this repo only accepts centos and rhel as platform strings, but it should support the other clones too, especially since they are bug-compatible rebuilds (a platform-detection sketch follows at the end of this comment). I understand, though, that current CycleCloud support is limited to Ubuntu, CentOS, and RHEL.

Thanks
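
For illustration only, here is a minimal sketch of how RHEL clones could be mapped onto the existing rhel platform family by reading /etc/os-release. The platform_family helper and the set of accepted IDs are hypothetical and not part of this repo:

# Hypothetical sketch: map RHEL clones onto a coarse "rhel" platform family
# by inspecting /etc/os-release. Names and mappings are illustrative only.

RHEL_CLONE_IDS = {"rhel", "centos", "almalinux", "rocky"}

def read_os_release(path="/etc/os-release"):
    """Parse /etc/os-release into a dict of KEY -> value."""
    info = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    return info

def platform_family(info):
    """Return a coarse platform family string, e.g. 'rhel' for RHEL clones."""
    os_id = info.get("ID", "").lower()
    id_like = info.get("ID_LIKE", "").lower().split()
    if os_id in RHEL_CLONE_IDS or "rhel" in id_like or "centos" in id_like:
        return "rhel"
    if os_id == "ubuntu" or "debian" in id_like:
        return "ubuntu"
    return os_id or "unknown"

if __name__ == "__main__":
    info = read_os_release()
    print(platform_family(info))  # e.g. "rhel" on AlmaLinux or Rocky Linux

On AlmaLinux and Rocky Linux, ID_LIKE includes "rhel", so a check like this would treat them the same way as CentOS/RHEL rather than rejecting the platform string outright.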

abdkha commented 9 months ago

Rocky Linux and RHEL are the only supported images if we want to use Intel MPI: https://www.intel.com/content/www/us/en/developer/articles/system-requirements/mpi-library-system-requirements.html

Because official support is not documented, I am hesitant to use these CycleCloud HPC images in my environment.

aditigaur4 commented 9 months ago

We do currently support AlmaLinux 8 images. The RHEL RPMs we build work on AlmaLinux; in fact, AlmaLinux is one of the HPC images CycleCloud defaults to.

aditigaur4 commented 9 months ago

Can you please describe what you are trying to do and what problems you are encountering?

abdkha commented 9 months ago

Hi there!

I have an Ansys Fluent job that uses the Slurm scheduler. It runs a 3D modeling simulation on HPC nodes. In my batch script, if I run the modeling on a single node, it works completely fine.

However, if I try to split the job to run in parallel across two HPC nodes, I encounter this error:

[mpiexec@ip-0AA31225] check_exit_codes (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:117): unable to run bstrap_proxy on ip-0AA31228 (pid 10632, exit code 65280)
[mpiexec@ip-0AA31225] poll_for_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:159): check exit codes error
[mpiexec@ip-0AA31225] HYD_dmx_poll_wait_for_proxy_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:212): poll for event error
[mpiexec@ip-0AA31225] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:1062): error waiting for event
[mpiexec@ip-0AA31225] HYD_print_bstrap_setup_error_message (../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec.c:1014): error setting up the bootstrap proxies
[mpiexec@ip-0AA31225] Possible reasons:
[mpiexec@ip-0AA31225] 1. Host is unavailable. Please check that all hosts are available.
[mpiexec@ip-0AA31225] 2. Cannot launch hydra_bstrap_proxy or it crashed on one of the hosts. Make sure hydra_bstrap_proxy is available on all hosts and it has right permissions.
[mpiexec@ip-0AA31225] 3. Firewall refused connection. Check that enough ports are allowed in the firewall and specify them with the I_MPI_PORT_RANGE variable.
[mpiexec@ip-0AA31225] 4. Ssh bootstrap cannot launch processes on remote host. Make sure that passwordless ssh connection is established across compute hosts.
[mpiexec@ip-0AA31225] You may try using -bootstrap option to select alternative launcher.

After doing some research, I'm under the impression that Intel MPI isn't working quite right because AlmaLinux isn't fully supported for Intel's InfiniBand stack.
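
The error's own list points at launch and connectivity problems (reasons 2 through 4) before any OS-support question. As a quick sanity check for reason 4, a small script along these lines can verify passwordless ssh between the allocated compute hosts from inside the job; expanding SLURM_JOB_NODELIST with scontrol is standard Slurm, but the script itself is only an illustrative sketch, not something shipped with this project:

# Illustrative sketch only: verify passwordless ssh between the hosts of a
# running Slurm job, which is "possible reason 4" in the mpiexec output above.
import os
import subprocess

def job_hostnames():
    """Expand SLURM_JOB_NODELIST into individual hostnames via scontrol."""
    nodelist = os.environ["SLURM_JOB_NODELIST"]
    out = subprocess.run(
        ["scontrol", "show", "hostnames", nodelist],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.split()

def can_ssh(host, timeout=10):
    """Return True if a non-interactive ssh to `host` succeeds."""
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "-o", f"ConnectTimeout={timeout}", host, "true"],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for host in job_hostnames():
        status = "ok" if can_ssh(host) else "FAILED"
        print(f"passwordless ssh to {host}: {status}")

If the ssh checks pass, another common workaround (hedged, since it depends on the Intel MPI version) is to let hydra bootstrap through Slurm instead of ssh, for example by setting I_MPI_HYDRA_BOOTSTRAP=slurm, which avoids launching the proxies over ssh entirely.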

aditigaur4 commented 5 months ago

Please let us know if you need something from us on this. We have recently released new AlmaLinux images, so please check whether the problem persists. Closing this for now; reopen if needed.