-
[issue stat img]: https://img.shields.io/issuestats/i/github/sourceryinstitute/OpenCoarrays.svg?style=flat-square
| Avg response time |
|:-------------------------…
-
New day, new issues.
I just tried the latest AMD software stack on Frontier:
```
module load cpe/23.12
module load PrgEnv-amd
module load amd/5.7.1
module load craype-accel-amd-gfx90a cmake cray…
```
-
### What happened?
The outermost atlas `mpi::Scope` does not restore the prior default eckit MPI communicator when it goes out of scope.
See code snippet below.
I suspect the problem is this:…
-
The MPI API takes 32-bit `int` counts. Given a big enough chunk this can crash the bbts. To reproduce, use chunks of a large size, e.g. 32k x 32k, and make sure there is some communication. We need a workaround for this.
-
We can either depend at compile time on mpi4py or, as in openPMD, mirror the C APIs:
https://github.com/openPMD/openPMD-api/blob/0.15.1/src/binding/python/Series.cpp#L41-L154
-
see MPI 4.0 19.1.2
-
**Environment:**
1. Framework: TensorFlow
2. Framework version: v2
3. Horovod version: 0.19.5
4. MPI version: 4.0.0
5. CUDA version: None
6. NCCL version: None
7. Python version: 3.6.9
10. OS and…
-
https://stackoverflow.com/questions/14004457/error-loading-mpi-dll-in-mpi4py
https://github.com/openai/large-scale-curiosity/issues/15
Has anyone gotten this error?
I have a bunch of problems when I…
-
The core RVIC routing scheme has been implemented in VIC_image. The next step is to implement it with MPI.
**There are two options to implement the routing with MPI:**
1. The execution of VIC is orga…
-
JuML should check for a CUDA-aware MPI implementation and avoid the extra data copy in front of MPI calls.