conda-forge / gpaw-feedstock

A conda-smithy repository for gpaw.
BSD 3-Clause "New" or "Revised" License

GPAW with intelmpi #35

Closed vladislavivanistsev closed 2 years ago

vladislavivanistsev commented 2 years ago

Comment:

Is it in principle possible to make a version of GPAW using intelmpi from the intel conda channel (https://anaconda.org/intel/impi_rt)? That would include compiling the other libraries, including libvdwxc and elpa, against intelmpi as well. I am asking because of a marked difference in speed between the openmpi and mpich builds of conda/GPAW 22.8, so I am curious whether intelmpi is even faster.

gdonval commented 2 years ago

You can build it yourself, either manually or by using conda-build with the feedstock from this repository to compile your own version (after modifying the recipe).
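
For reference, a minimal sketch of that local-build route, assuming a clone of this feedstock and Intel MPI packaged as impi_rt on the intel channel; the exact recipe edits and package names are assumptions and will depend on how Intel MPI is distributed:

```
# Sketch only, untested: build a modified gpaw recipe locally with conda-build.
conda install -n base conda-build
git clone https://github.com/conda-forge/gpaw-feedstock.git
cd gpaw-feedstock
# Edit recipe/meta.yaml so the MPI variant points at Intel MPI (assumed
# package name: impi_rt from the intel channel) instead of openmpi/mpich.
conda build recipe/ -c intel -c conda-forge
```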

There are multiple problems with providing such a version here, the first one being that anything on conda-forge ought to be buildable using packages from conda-forge only.


There are performance differences between MPI implementations, but they should vanish on a balanced system in typical DFT applications (e.g. if you are not using far too many cores for your atomic system size). If you see a marked difference at reasonable parallel scales, there must be a configuration problem (for example, you are not using the appropriate PML backend for your hardware, or you are not actually using the Infiniband interface, etc.).
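
In case it helps, a hedged sketch of how one might check which PML backend and fabric are actually in use with the conda-forge openmpi build (this assumes an OpenMPI built with UCX support and an Infiniband HCA; the script name calc.py is a placeholder):

```
# List the PML components available in the installed OpenMPI build.
ompi_info | grep -i pml
# Show the transports and devices UCX can see (an Infiniband HCA should appear).
ucx_info -d | grep -i -e transport -e device
# Force the UCX PML for one run and raise the framework verbosity so the
# selected backend is printed; the job aborts if UCX cannot be initialised.
mpirun --mca pml ucx --mca pml_base_verbose 10 -np 24 gpaw python calc.py
```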

vladislavivanistsev commented 2 years ago

Thank you for clarifying that building GPAW against intelmpi is possible in principle, but that such a build cannot be part of conda-forge.

The difference in speed is 5–10% in favour of openmpi 4.1.4 over mpich 4.0.2 on 24 CPUs. There might be a problem with our HPC configuration, which we cannot change ourselves.

The reason I started using the mpich version is the incompatibility of openmpi 4.1.4 with ucx 1.13. With mpich I can explicitly select UCX as the PML via export OMPI_MCA_pml="ucx".
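
For what it is worth, a minimal sketch of how the two conda-forge variants could be compared side by side, with one environment per MPI build so they do not interfere; the version pin, build-string wildcards, and calc.py are assumptions/placeholders:

```
# Sketch only: one environment per MPI variant of the conda-forge gpaw build.
conda create -n gpaw-openmpi -c conda-forge "gpaw=22.8.0=*openmpi*"
conda create -n gpaw-mpich   -c conda-forge "gpaw=22.8.0=*mpich*"

# Run the same 24-core job in each environment and compare timings.
conda run -n gpaw-openmpi mpirun -np 24 gpaw python calc.py
conda run -n gpaw-mpich   mpirun -np 24 gpaw python calc.py
```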