-
Hello,
My apologies for troubling you with this. I've been trying to compile and run the ALPS example Knap with MPI.
Here are my steps:
1. `wget https://raw.githubusercontent.com/coin-or/coinbre…
-
Hello,
This is to report an issue we are seeing with MPICH on Intel GPUs (related to an IOR issue from @pkcoff).
If we run a code (reproducer below) which calls MPI_File_write_at with a GPU devi…
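The reproducer itself needs an MPI build and a GPU, but the access pattern behind `MPI_File_write_at` can be sketched without either: each rank writes its block at an explicit byte offset, with no shared file pointer involved. Below is a plain-Python illustration using POSIX `os.pwrite`; the rank count, block size, and file name are illustrative assumptions, not taken from the report.

```python
# Plain-Python sketch (no MPI, no GPU) of the access pattern behind
# MPI_File_write_at: each "rank" writes its block at an explicit byte
# offset. NRANKS, BLOCK, and the file name are illustrative only.
import os

NRANKS = 4
BLOCK = 8  # bytes per rank

path = "pwrite_demo.bin"
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
for rank in range(NRANKS):
    payload = bytes([rank]) * BLOCK
    # pwrite is the POSIX analogue of "write_at": the offset is passed
    # explicitly per call instead of advancing a file pointer.
    os.pwrite(fd, payload, rank * BLOCK)
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
print(len(data))                      # 32
print(data[8:16] == bytes([1]) * 8)   # True
```

In the GPU case from the report, the buffer passed to `MPI_File_write_at` lives in device memory, which is exactly where the MPICH/IOR issue appears; the sketch above only shows the offset-based write semantics.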
-
In the line `MPI_Allreduce(counts.data(), counts_reduced.data(), counts.size(), MPI_UNSIGNED, MPI_SUM, MPI_COMM_WORLD);` in /include/cstone/tree/update_mpi_gpu.cuh, is `counts.size()` different among…
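For context: `MPI_Allreduce` requires every rank to pass the same count (and datatype), so if `counts.size()` differed across ranks the call would be erroneous; in practice that usually shows up as a hang or silent data corruption rather than an error. A plain-Python emulation of the element-wise semantics (no MPI needed; `allreduce_sum` and the per-rank buffers are illustrative, not from cstone):

```python
# Plain-Python emulation (no MPI) of the rule behind MPI_Allreduce:
# every rank must contribute the same element count. allreduce_sum and
# the buffers below are illustrative, not cstone code.
def allreduce_sum(per_rank_buffers):
    lengths = {len(buf) for buf in per_rank_buffers}
    if len(lengths) != 1:
        # With real MPI this is an erroneous program: the collective
        # typically hangs or corrupts data instead of raising cleanly.
        raise ValueError(f"count mismatch across ranks: {sorted(lengths)}")
    # Element-wise sum, as MPI_SUM over MPI_UNSIGNED would compute it.
    return [sum(col) for col in zip(*per_rank_buffers)]

print(allreduce_sum([[1, 2, 3], [10, 20, 30]]))  # [11, 22, 33]
try:
    allreduce_sum([[1, 2], [1, 2, 3]])
except ValueError as e:
    print("error:", e)
```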
-
### What happened?
I am getting failures of this type:
```
Completed case 0: Test MPI Communicator Split
0 tests failed out of 1.
Completed case 0: Test MPI Communicator Split
0 tests failed…
```
-
As discussed in CSEG we want to remove the use of mpi-serial in our tests and simulations. This is largely because modern MPI libraries allow you to link with the MPI library but still run serially WI…
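To illustrate what a serial MPI stub provides, and why it becomes redundant: with a single rank, the collectives degenerate to copies, which a real MPI library already handles natively when launched with one process. A minimal sketch (illustrative only, not CSEG or mpi-serial code):

```python
# Illustrative single-rank "MPI" shim (not CSEG/mpi-serial code).
# With one rank, collective operations degenerate to copies; modern
# MPI libraries behave this way natively when run with a single rank,
# which is why a dedicated serial stub can be dropped.
class SerialComm:
    size = 1
    rank = 0

    def allreduce(self, values):
        # One rank: the reduction of a single contribution is itself.
        return list(values)

    def bcast(self, values, root=0):
        # One rank: broadcast is the identity.
        return list(values)

comm = SerialComm()
print(comm.allreduce([1, 2, 3]))  # [1, 2, 3]
print(comm.bcast([7]))            # [7]
```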
-
This would avoid the mismatch problems caused by the f90 MPI modules in some MPI frameworks.
-
We currently perform MPI-based tests under Python in two different ways:
- running `mpirun -np N python -m pytest ...` on the `pytest/mpi/{2,3,4}` directories (*direct approach*)
- running PyNEST ind…
-
With the following configure line:
```
./configure -bfd=download -ompt -c++=icpc -cc=icc -fortran=ifort -mpi -pthread -tag=oneapi-2023.2.0-impi
```
I get the failure below, although the MPI library i…
-
In MOSART, the calls using `shr_mpi_sum`, `shr_mpi_max`, `shr_mpi_min`, `mpi_barrier`, and `mpi_bcast` could be replaced by calls to `ESMF_VMAllReduce` and `ESMF_VMBroadcast`.
The following must be taken …
-
Problem: a multi-node MPI hello world on Slurm-launched Flux on Frontier fails with
```
$ flux run --env=LD_LIBRARY_PATH=$(dirname $(flux config builtin pmi_library_path)):$LD_LIBRARY_PATH -n2 -N2 ./m…
```