-
## Reason
GPU-aware MPI may not yet outperform the host-staged approach for GPU applications, but it is worth having GPU-aware MPI support so that we can make comparisons.
## Design
The GPU-aware M…
-
The hpc-stack module locations for ufs-wm and srw-app have been maintained separately. The following installations are needed to bring them into sync. This will provide more efficient hpc-st…
-
**Originally by _koziol_ on 2011-04-28 13:54:33 -0500**
---
This proposal replaces the current nonblocking collective I/O routines (in the form of split collectives) with routines for immediate nonb…
-
In some applications, there is read-only global state that needs to be distributed among workers. This can be implemented via multiprocessing.Pool's initializer and initargs options. The nonuniformi…
-
In a project of mine I am using the cmake statement `check_include_file(mpi-ext.h HAVE_MPI_EXT_H)` to see whether MPI provides the file `mpi-ext.h`. Surprisingly this check fails for OpenMPI 5.0.0. Th…
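For reference, one possible workaround sketch, assuming the check fails because the probe is compiled without MPI's include path and because `mpi-ext.h` may not be includable on its own (it generally expects `mpi.h` to be included first):

```cmake
find_package(MPI REQUIRED C)

include(CheckIncludeFiles)
# Compile the probe with MPI's include directories visible, and include
# mpi.h before mpi-ext.h, since mpi-ext.h may not be self-contained.
set(CMAKE_REQUIRED_INCLUDES ${MPI_C_INCLUDE_DIRS})
check_include_files("mpi.h;mpi-ext.h" HAVE_MPI_EXT_H)
```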
-
I am attempting to use openEMS with MPI and I'm getting the following errors:
##############
Running remote openEMS_MPI in working dir: /tmp/openEMS_MPI_OxYoCbMoLCNW
warning: implicit conversion from nu…
-
MPI 4.0 has introduced new info keys that allow applications to signal the non-use of some MPI P2P semantics that otherwise might impose unwanted overhead, namely "mpi_assert_allow_overtaking", "mpi_…
-
**Originally by _jjhursey_ on 2012-02-27 06:20:57 -0600**
---
# Original

## Problem
Section 8.3 is imprecise about the set of processes over which `MPI_ERRORS_ARE_FATAL` is applied. Since the error h…
-
### What is your question?
I really like HTA, and the data it generates is quite useful. I think the same analysis could be useful for other cases as well, so I was wondering whether a trace generated for o…
-
## Background information
### What version of Open MPI are you using?
4.1.2
### Describe how Open MPI was installed
Downloaded from https://download.open-mpi.org/release/open-mpi/v4.1/open…