-
Hello, maintainers.
I'm currently trying to integrate STRUMPACK CUDA support into PETSc and found that STRUMPACK_USE_MPI and STRUMPACK_USE_CUDA cannot be switched ON at the same time because nvcc…
-
Install the latest 2022 release of GROMACS initially on Kathleen.
https://manual.gromacs.org/documentation/2022/download.html
We may need to wait for Spack to be available centrally before installing.
-
### Background
Interesting problem here, hoping someone can help (@danielpeter?). I'm working with @ykane and his student to run inversions with SeisFlows on their [GPU cluster in Japan](https://ww…
bch0w updated 2 months ago
-
Hi, I wonder why the P2P communication in FTPipe is implemented with CUDA-aware MPI instead of NCCL. Is it because MPI performs better, or is there another reason?
BTW, can I run this repo without re-compiling CUDA-Aware …
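For comparison, here is a minimal sketch of the same point-to-point pattern expressed with NCCL through `torch.distributed` instead of CUDA-aware MPI. This assumes a two-rank `torchrun` launch with one GPU per rank; the function name and buffer shape are illustrative, not taken from FTPipe.

```python
import torch
import torch.distributed as dist

def p2p_demo():
    """Exchange a GPU tensor between rank 0 and rank 1 over NCCL.

    Illustrative sketch (names are ours, not FTPipe's). Assumes two
    processes launched with torchrun and one visible GPU per rank.
    """
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)
    buf = torch.full((4,), float(rank), device="cuda")
    if rank == 0:
        dist.send(buf, dst=1)   # GPU buffer travels directly (NVLink/IB)
    elif rank == 1:
        dist.recv(buf, src=0)   # after this, buf holds rank 0's values
    dist.destroy_process_group()
```

With NCCL the tensors never have to be staged through host memory, which is the usual motivation for preferring it over CUDA-aware MPI for GPU-to-GPU traffic.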
-
### 🐛 Describe the bug
Hi! I'm trying to run PyTorch with CUDA-aware MPI as backend and I'm consistently running into the same error with any collective operation. I am pretty confident I have instal…
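One hedged sanity check for this kind of failure is to confirm the MPI backend was actually compiled into the PyTorch build before initializing it; MPI support is only present when PyTorch is built from source against an MPI installation. The helper name below is ours, not part of PyTorch:

```python
import torch
import torch.distributed as dist

def pick_backend() -> str:
    """Return the most capable distributed backend in this build.

    Illustrative helper (not a PyTorch API): the MPI backend only
    exists when PyTorch was compiled from source with MPI available.
    """
    if torch.cuda.is_available() and dist.is_nccl_available():
        return "nccl"
    if dist.is_mpi_available():
        return "mpi"
    return "gloo"  # CPU fallback shipped with every build
```

If `dist.is_mpi_available()` returns `False`, collectives on the `"mpi"` backend will fail regardless of whether the underlying MPI is CUDA-aware.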
-
### 🐛 Describe the bug
```python
import torch
torch.distributed.init_process_group(backend="mpi")
nccl_group = torch.distributed.new_group(backend="nccl")
```
```
[rank0]: Traceback (most r…
-
It would be good if CUDA-aware MPI could be packaged. Should it be the default MPI we use for Yggdrasil packages?
-
Dear Sir:
I cannot understand the alpaka abstraction. Could you please explain how to use ISAAC in my MPI+CUDA code with a structured mesh?
Sincerely,
LI JIAN
CHINA
-
### Summary
Support non-virtual dependencies in `lmod:hierarchy` in `modules.yaml`. Some packages optionally come with support for non-virtual dependencies (e.g. CUDA, see below) that would make se…
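A sketch of what the requested configuration might look like in `modules.yaml`; the `cuda` entry is the hypothetical non-virtual addition, alongside the virtual `mpi` layer that `lmod:hierarchy` already supports:

```yaml
modules:
  default:
    lmod:
      hierarchy:
        - mpi    # virtual dependency: already supported today
        - cuda   # non-virtual package: the requested extension
```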
-
Hi, I'm trying to build Hypre with CUDA on my Linux laptop and get this error with both make and cmake. I'm trying to use the Intel MPI libraries. Thank you for your help.
Marcos
$ cmake -DHYPRE_…