-
Can we do Multi-GPU over MPI?
Here is a summary of PyTorch's distributed computing capabilities:
https://pytorch.org/docs/stable/distributed.html
AFAIK there are no "distributed tensors" or anything like …
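That said, collectives over MPI do work. A minimal sketch (assuming PyTorch was built with MPI support and the script is launched with `mpirun`; the file name and tensor shape are illustrative):

```python
# Sketch: all-reduce across ranks using torch.distributed's MPI backend.
# Assumes PyTorch built with MPI support, launched as
# `mpirun -n 2 python allreduce_demo.py` (file name is illustrative).
import torch
import torch.distributed as dist

dist.init_process_group(backend="mpi")  # rank/world size come from the MPI launcher
rank = dist.get_rank()

# Keep the tensor on CPU unless your MPI build is CUDA-aware; with a
# CUDA-aware MPI you could move it to `torch.device("cuda", rank)` instead.
x = torch.full((4,), float(rank))
dist.all_reduce(x, op=dist.ReduceOp.SUM)  # in-place sum over all ranks
print(f"rank {rank}: {x.tolist()}")

dist.destroy_process_group()
```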
-
I was trying to run Malamute on an HPC cluster with 24 cores. The program runs smoothly in serial with the command:
`malamute-opt -i dcs5_5_mm_constant_properties.i >& log.out`
For MPI, the executi…
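For comparison, a MOOSE-based application like Malamute is conventionally launched under MPI by prefixing the serial command with the launcher; a generic sketch of that convention, not necessarily the command used here:

```
mpiexec -n 24 malamute-opt -i dcs5_5_mm_constant_properties.i >& log.out
```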
-
Hi, I am using dynesty with MPI for parallel computation in my project, and I’ve encountered an issue with progress display. When running the code on a single core, the progress is displayed correctly…
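A minimal sketch of this kind of setup (assuming schwimmbad's `MPIPool`; the likelihood, dimensionality, and `mpirun -n 4 python run_dynesty.py` launch line are illustrative), where only the master rank drives the sampler so the progress bar is written by a single process:

```python
# Sketch: dynesty + MPI via schwimmbad, progress printed only on rank 0.
# Assumes `pip install schwimmbad` and launch with
# `mpirun -n 4 python run_dynesty.py` (names here are illustrative).
import sys
import numpy as np
import dynesty
from schwimmbad import MPIPool

ndim = 3

def loglike(x):
    # Toy isotropic Gaussian log-likelihood.
    return -0.5 * np.sum(x ** 2)

def prior_transform(u):
    # Map the unit cube to a uniform prior on [-10, 10] per dimension.
    return 20.0 * u - 10.0

if __name__ == "__main__":
    with MPIPool() as pool:
        if not pool.is_master():
            pool.wait()   # worker ranks just serve likelihood evaluations
            sys.exit(0)
        sampler = dynesty.NestedSampler(
            loglike, prior_transform, ndim,
            pool=pool, queue_size=pool.size,
        )
        # Only the master reaches this call, so the progress line is
        # printed once instead of being interleaved across processes.
        sampler.run_nested(print_progress=True)
```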
-
Dear @martinjrobins,
Thank you for developing Aboria! I've thoroughly enjoyed using this library for analyzing molecular dynamics data in my work. One of the things that I've been curious about is …
-
I am wondering if it is possible to "mimic" a hybrid parallelization approach with `mpi4py` using `ipyparallel`?
I.e., could each engine have access to several cores?
By hybrid I mean using mpi4py for communi…
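A minimal sketch of the idea (assuming the engines were started with ipyparallel's MPI launcher so `MPI.COMM_WORLD` spans all engines; the function and workload below are illustrative):

```python
# Sketch of a "hybrid" layout: mpi4py communicates *between* engines while a
# thread pool uses several cores *within* each engine. Assumes the engines
# were launched with ipyparallel's MPI support, so MPI.COMM_WORLD spans
# all engines rather than being size 1 per engine.
import ipyparallel as ipp

rc = ipp.Client()
view = rc[:]

@view.remote(block=True)
def hybrid_sum(n_local=4):
    from concurrent.futures import ThreadPoolExecutor
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Intra-engine part: n_local threads on this engine's cores. For pure
    # Python, CPU-bound work the GIL limits this; in practice the threads
    # would call GIL-releasing code (NumPy, OpenMP kernels, etc.).
    with ThreadPoolExecutor(max_workers=n_local) as ex:
        local = sum(ex.map(lambda i: (rank * n_local + i) ** 2, range(n_local)))

    # Inter-engine part: mpi4py collective across all engines.
    return rank, comm.allreduce(local, op=MPI.SUM)

print(hybrid_sum())
```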
-
### Discussed in https://github.com/orgs/mfem/discussions/4488
Originally posted by **CINTROINI** September 5, 2024
Dear MFEM community,
We are developing a new code based on MFEM to simu…
-
I have a few questions regarding the parallelism implemented in triqs:
(1) I linked triqs successfully against Intel MPI and MKL (all tests pass). But if I try to run any of the examples in paralle…
-
Looking at the code, I have a hard time imagining that this ever worked.
-
At the moment there is no CI in the repository. The matrix would be:
- [ ] compiler: GCC, AOCC
- [ ] precision: single, double, long double, quad
- [ ] parallelization: sequential, thread, OpenMP
-…