Hello!

I noticed that when running Smilei on TGCC's Irene cluster, Rome partition (I haven't tried other partitions of this cluster; compiled with `machine=joliot_curie_rome`), the "Initializing MPI" part of Smilei's log prints:
OpenMP task parallelization not activated
My colleagues at the CELIA laboratory running Smilei on the same partition get the same message. I tried recompiling Smilei differently by removing `config=no_mpi_tm` from the make command. This enables MPI thread support, but OpenMP task parallelization remains deactivated. Forcing a compiler option by setting `export MPICXX='mpicxx -fopenmp'` had no effect. Setting `export OMP_PROC_BIND=true` and `export OMP_PLACES=cores` had no effect either.
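For reference, here is a minimal standalone program (my own sketch, not Smilei code) that should show whether the OpenMP runtime actually sees these binding settings when compiled with the same `mpicxx` wrapper; the numeric-to-name mapping in the comment is taken from the OpenMP specification's `omp_proc_bind_t` enum:

```cpp
// check_omp_binding.cpp -- verify that the OpenMP runtime sees
// OMP_PROC_BIND / OMP_PLACES. Build with the same wrapper as Smilei:
//   mpicxx -fopenmp check_omp_binding.cpp -o check_omp_binding
#include <cstdio>
#include <cstdlib>
#include <omp.h>

int main() {
    // What the environment claims:
    const char* bind   = std::getenv("OMP_PROC_BIND");
    const char* places = std::getenv("OMP_PLACES");
    std::printf("OMP_PROC_BIND env : %s\n", bind   ? bind   : "(unset)");
    std::printf("OMP_PLACES env    : %s\n", places ? places : "(unset)");

    // What the runtime actually applied
    // (0=false, 1=true, 2=master, 3=close, 4=spread per the OpenMP spec):
    std::printf("omp_get_proc_bind()  = %d\n", (int)omp_get_proc_bind());
    std::printf("omp_get_num_places() = %d\n", omp_get_num_places()); // OpenMP >= 4.5

    #pragma omp parallel
    {
        #pragma omp single
        std::printf("threads in parallel region: %d\n", omp_get_num_threads());
    }
    return 0;
}
```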
This brings me to three questions:
Question 1: I don't understand how MPI thread support relates to OpenMP. The machine file recommends disabling it for some reason, yet the code compiles successfully when `config=no_mpi_tm` is removed. The instruction to add this option being three years old, maybe it is simply no longer necessary? Is it even possible for OpenMP tasks to be effective when MPI thread support is not enabled?
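To phrase this more concretely, here is my understanding of the thread-support levels involved, as a minimal sketch (my own test program, not Smilei code) that asks the MPI library what it actually grants:

```cpp
// check_mpi_threads.cpp -- query the MPI thread-support level.
//   mpicxx -fopenmp check_mpi_threads.cpp -o check_mpi_threads
#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    int provided = 0;
    // Request full thread support, as a hybrid MPI+OpenMP code might:
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    const char* name = "unknown";
    if (provided == MPI_THREAD_SINGLE)          name = "MPI_THREAD_SINGLE";     // no threads
    else if (provided == MPI_THREAD_FUNNELED)   name = "MPI_THREAD_FUNNELED";   // only main thread calls MPI
    else if (provided == MPI_THREAD_SERIALIZED) name = "MPI_THREAD_SERIALIZED"; // one thread at a time calls MPI
    else if (provided == MPI_THREAD_MULTIPLE)   name = "MPI_THREAD_MULTIPLE";   // any thread may call MPI

    std::printf("requested MPI_THREAD_MULTIPLE, provided: %s\n", name);
    MPI_Finalize();
    return 0;
}
```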
Question 2: At CELIA, we are also wondering how things work when there are multiple threads per MPI process (64 in my case) but OpenMP task parallelization is not activated. Indeed, running with several threads per MPI process still seems faster than running with only one, despite that message.
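To be explicit about the distinction I have in mind, a toy sketch (my own example, not Smilei's actual structure): the first region uses ordinary loop worksharing, which any OpenMP-enabled build provides, while the second uses the task constructs that the log message refers to:

```cpp
// omp_modes.cpp -- contrast OpenMP loop worksharing with OpenMP tasks.
//   mpicxx -fopenmp omp_modes.cpp -o omp_modes   (or g++ -fopenmp ...)
#include <cstdio>
#include <omp.h>

void work(int i) { std::printf("item %d on thread %d\n", i, omp_get_thread_num()); }

int main() {
    // Loop worksharing: iterations split across the thread team.
    #pragma omp parallel for
    for (int i = 0; i < 8; ++i) work(i);

    // Task parallelism: one thread creates tasks, the team executes them
    // dynamically -- the mode the Smilei log reports as not activated.
    #pragma omp parallel
    #pragma omp single
    for (int i = 0; i < 8; ++i) {
        #pragma omp task firstprivate(i)
        work(i);
    }
    return 0;
}
```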
Question 3: Last but not least, do you have any hint on how to solve this issue and enable OpenMP task parallelization, please?
Additional info:
Smilei version : 5.1-34-g60be16288-master
I have no idea which version of OpenMP is installed, or even where to look for this information.
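My assumption is that the supported version can be read from the `_OPENMP` macro defined by the compiler; a minimal sketch (the date-to-version mapping in the comment is taken from the OpenMP specifications):

```cpp
// omp_version.cpp -- print the OpenMP spec date the compiler implements.
//   mpicxx -fopenmp omp_version.cpp -o omp_version
#include <cstdio>

int main() {
#ifdef _OPENMP
    // _OPENMP is the yyyymm date of the supported spec, e.g.
    // 201107 = OpenMP 3.1, 201307 = 4.0, 201511 = 4.5, 201811 = 5.0.
    std::printf("_OPENMP = %d\n", _OPENMP);
#else
    std::printf("compiled without OpenMP support\n");
#endif
    return 0;
}
```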
Compilation environment information (obtained by running `make env`):
make: python: Command not found
/bin/sh: mpicxx: command not found
make: python: Command not found
make: python: Command not found
VERSION :
SMILEICXX : mpicxx
OPENMP_FLAG : -fopenmp -D_OMP
HDF5_ROOT_DIR :
FFTW3_LIB_DIR :
/bin/sh: python: command not found
/bin/sh: python: command not found
SITEDIR :
PYTHONEXE : python
PY_CXXFLAGS :
PY_LDFLAGS :
CXXFLAGS : -D__VERSION=\"\" -DOMPI_SKIP_MPICXX -std=c++14 -Isrc -Isrc/Pusher -Isrc/Python -Isrc/Radiation -Isrc/Field -Isrc/ParticleBC -Isrc/ParticleInjector -Isrc/Checkpoint -Isrc/Collisions -Isrc/MultiphotonBreitWheeler -Isrc/Ionization -Isrc/Merging -Isrc/ElectroMagn -Isrc/Interpolator -Isrc/ElectroMagnBC -Isrc/SmileiMPI -Isrc/Species -Isrc/ElectroMagnSolver -Isrc/Patch -Isrc/DomainDecomposition -Isrc/Profiles -Isrc/Projector -Isrc/Particles -Isrc/picsar_interface -Isrc/Params -Isrc/PartCompTime -Isrc/Diagnostic -Isrc/MovWindow -Isrc/Tools -Ibuild/src/Python -O3 -g -fopenmp -D_OMP
LDFLAGS : -lhdf5 -lm -fopenmp -D_OMP
GPU_COMPILER :
GPU_COMPILER_FLAGS :
COMPILER_INFO :
Running environment information (job log):
SCRIPT_PID=174017
/bin/bash -x /tmp/tmp.THao4k0i2U
set +x
unset _mlshdbg
'[' 1 = 1 ']'
case "$-" in
set +x
unset _mlshdbg
module purge
local _mlredir=0
'[' -n '' ']'
case " $@ " in
'[' 0 -eq 0 ']'
_module_raw purge
unset _mlshdbg
'[' 1 = 1 ']'
case "$-" in
set +x
module dfldatadir/own (Data Directory) cannot be unloaded
Unloading datadir/own
ERROR: Dependent dfldatadir/own is loaded
Unloading datadir/celia
ERROR: Dependent datadir/own and dfldatadir/own are loaded
Unloading ccc/1.0
ERROR: Dependent datadir/celia and datadir/own and dfldatadir/own are loaded
set +x
unload module hdf5/1.12.0
unload module flavor/hdf5/serial
load module flavor/hdf5/parallel
WARNING: the loaded flavor/hdf5/parallel does not exists, so flavor/hdf5/parallel intel--20.0.2__openmpi--4.0.1/parallel is used instead
load module hdf5/1.12.0
Thanks for helping!
Howel