-
Ensure N-body and sinks are correctly computed and communicated between MPI nodes
-
I'm trying to save FFTW MPI wisdom, and I'm getting strange behavior.
Since my understanding is that FFTW wisdom accumulates, I figured the easiest thing I could do is create an empty file, then read wisdom …
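For reference, a minimal sketch of the import/broadcast/gather/export cycle described in the FFTW MPI documentation, assuming a placeholder file name `wisdom.dat`. A failed import (missing or empty file) simply leaves the current wisdom unchanged, so pre-creating an empty file should not be necessary:

```c
/* Sketch of the FFTW MPI wisdom pattern; "wisdom.dat" is a placeholder. */
#include <mpi.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* On startup: rank 0 reads any existing wisdom (a failed import is
     * harmless), then shares it with every rank. */
    if (rank == 0)
        fftw_import_wisdom_from_filename("wisdom.dat");
    fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);

    /* ... create plans and run transforms here ... */

    /* Before exit: collect the wisdom accumulated on all ranks onto
     * rank 0 and write it back out, so it grows across runs. */
    fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
    if (rank == 0)
        fftw_export_wisdom_to_filename("wisdom.dat");

    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}
```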
-
There is roughly a factor of 4-5 slowdown when using 32 cores across 2 nodes compared to 16 cores on a single node. The slowdown persists even at 128 cores across 8 nodes.
This consistently occurs for t…
-
# Problem
Currently, the MPI_T event interface allows sharing information about internals in the form of basic integer (and floating-point) elements that are part of a larger event definition. However, this curr…
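For context, a rough MPI-4.0 sketch of how a tool consumes such an event today, pulling each basic element out of the event instance by index with MPI_T_event_read. The event index 0, the `int` element type, and the callback body are placeholders, not tied to any real event definition:

```c
/* Rough MPI-4.0 sketch: register a callback for one event and read a
 * single basic element from each event instance. */
#include <mpi.h>
#include <stdio.h>

static void event_cb(MPI_T_event_instance instance,
                     MPI_T_event_registration registration,
                     MPI_T_cb_safety cb_safety, void *user_data)
{
    (void)registration; (void)cb_safety; (void)user_data;
    int value;
    /* Elements are read out of the instance one at a time, by index. */
    MPI_T_event_read(instance, 0, &value);
    printf("event element 0 = %d\n", value);
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_event_registration reg;
    if (MPI_T_event_handle_alloc(0 /* placeholder event index */, NULL,
                                 MPI_INFO_NULL, &reg) == MPI_SUCCESS)
        MPI_T_event_register_callback(reg, MPI_T_CB_REQUIRE_NONE,
                                      MPI_INFO_NULL, NULL, event_cb);

    /* ... application communication that triggers the event ... */

    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}
```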
-
```
$ mpirun ./main 5 its/0.xml | grep -i GSL
[GSL Random Number Generator] generator type: mt19937
[GSL Random Number Generator] seed = 1515443941
[GSL Random Number Generator] first value = 4052…
```
-
- [x] I read [the conda-forge documentation](https://conda-forge.org/docs/user/introduction.html#how-can-i-install-packages-from-conda-forge) and could not find the solution for my problem there.
…
-
Next steps for the MPI module.
## Features
- [ ] Extend functionality
- Extend to MPI-2 and 3 routines
- We currently support most of MPI-1, and a random subset of MPI-2 and MPI-3.…
-
```
Several possible ways to optimize the MPI part of the code:
1) use one buffer for all MPI communications;
2) possibly use MPI_ALLTOALL and 'derived datatypes' for block_transpose;
3) use 'persistent c…
```
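On point 2, a minimal sketch (under assumed names and layout, not taken from this code) of a block transpose built on MPI_Alltoall with a resized vector datatype, for a square matrix distributed by rows where each of the P ranks holds n_loc rows of N = P*n_loc doubles in row-major order:

```c
/* Sketch: global transpose of a row-distributed square matrix using
 * MPI_Alltoall with a resized vector datatype. Layout assumption:
 * each of the P ranks owns n_loc rows of N = P*n_loc doubles, row-major. */
#include <mpi.h>
#include <stdlib.h>

void block_transpose(const double *A, double *T, int n_loc, MPI_Comm comm)
{
    int P;
    MPI_Comm_size(comm, &P);
    const int N = P * n_loc;

    /* One n_loc x n_loc column block inside the local row-major array... */
    MPI_Datatype blk, blk_resized;
    MPI_Type_vector(n_loc, n_loc, N, MPI_DOUBLE, &blk);
    /* ...resized so the blocks destined for ranks 0..P-1 start n_loc
     * doubles apart, which is how MPI_Alltoall strides the send buffer. */
    MPI_Type_create_resized(blk, 0, n_loc * sizeof(double), &blk_resized);
    MPI_Type_commit(&blk_resized);

    /* Rank r receives its n_loc target columns from every rank p,
     * packed contiguously per source rank. */
    double *recv = malloc((size_t)N * n_loc * sizeof(double));
    MPI_Alltoall(A, 1, blk_resized, recv, n_loc * n_loc, MPI_DOUBLE, comm);

    /* Local rearrangement: transpose each received block into place. */
    for (int p = 0; p < P; ++p)
        for (int i = 0; i < n_loc; ++i)
            for (int k = 0; k < n_loc; ++k)
                T[k * N + p * n_loc + i] =
                    recv[(size_t)p * n_loc * n_loc + i * n_loc + k];

    free(recv);
    MPI_Type_free(&blk_resized);
    MPI_Type_free(&blk);
}
```

The resized extent is what lets a single MPI_Alltoall pull the P interleaved column blocks straight out of the existing send buffer, so no separate packing buffer is needed, which also ties in with point 1.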
-
```
Boost MPI has been used with:
- MPICH on windows/linux (1 node, 4 cpus, 4gb ram; 20 nodes, 2 cpus, 2gb ram, 100Mbit lan)
- MPICH2 on windows (1 node, 4 cpus, 4gb ram)
- OpenMPI on linux (168 node…
```
-
Hey, I just wanted to ask about the feasibility of using this library as a replacement for a large-scale parallel solver like PETSc or hypre. Looking at the code for `cg.jl`, it seems like all that is…