-
I was looking into the regridder API for some benchmarks that we are running (see [here](https://github.com/coiled/benchmarks/blob/main/tests/geospatial/workloads/regridding.py)) and noticed that the …
-
Hello,
running a job using MPI parallelization causes the following error:
```
singularity exec abaqus-2017-centos-7.simg abaqus j=abaqus_test2017 cpus=8 interactive
```
results in
```
Analy…
-
I am trying to run some chains in parallel on an LSF cluster, but I get the following error message:
```
Initialising ensemble of 128 walkers...
Initialising ensemble of 128 walkers...
Traceback (mos…
-
### Description
On `trunk-minor`:
- [x] Deprecate `gpu_ids`.
- [x] Replace with `gpu_id` (deprecation sketch after this list).
On `trunk-major`:
- [x] Remove the multi-GPU code.
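A minimal sketch of the shim the `trunk-minor` step implies, assuming a Python entry point (the `configure()` name and signature are illustrative, not the project's actual API):
```
import warnings

def configure(gpu_id=None, gpu_ids=None):
    # Hypothetical signature: keep accepting the deprecated plural
    # argument for one release cycle and funnel it into the new one.
    if gpu_ids is not None:
        warnings.warn(
            "'gpu_ids' is deprecated and will be removed; use 'gpu_id'.",
            DeprecationWarning,
            stacklevel=2,
        )
        if gpu_id is None:
            gpu_id = gpu_ids[0]  # single-GPU only: keep the first entry
    return gpu_id
```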
### Motivation and context
Multi-GPU was specifica…
-
@anderkve: Do you have any idea about the overhead of MPI for DE and MultiNest? I was trying to run the computations on fram using the MPI functionality and several cores. I did not get the speedu…
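One way to separate MPI overhead from the cost of the scan itself would be a strong-scaling probe with a dummy likelihood; a sketch assuming mpi4py (`fake_loglike` is a placeholder, not the real interface):
```
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def fake_loglike(x):
    # placeholder for the real likelihood; ~10 ms per evaluation
    time.sleep(0.01)
    return -0.5 * x * x

n_total = 1024                  # fixed total work (strong scaling)
n_local = n_total // size
comm.Barrier()
t0 = MPI.Wtime()
for i in range(n_local):
    fake_loglike(float(rank * n_local + i))
comm.Barrier()
if rank == 0:
    print(f"{size} ranks: {MPI.Wtime() - t0:.2f}s for {n_total} evals")
```
If the wall time with this dummy likelihood stops shrinking as ranks are added, the overhead is in the MPI layer rather than in the likelihood.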
-
Looking at lines 19-20 of CMakeLists.txt:
```
option(NUDUSTC_ENABLE_OPENMP ON "Use OpenMP for cell/particle parallelization")
option(NUDUSTC_ENABLE_MPI ON "Use MPI for cell/particle paral…
-
This issue is to keep track of the necessary steps towards an initial parallel octree implementation in Trixi. It should thus be amended as we progress and gather more experience about which ideas wor…
-
Hi!
I'm trying to use `UltraNest` with a very expensive likelihood whose evaluation at a single point of the parameter space needs to be parallelized using MPI.
The likelihood is automatically vec…
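For reference, the pattern I have in mind is a sketch only, assuming mpi4py: rank 0 drives the sampler while every rank shares the work of each vectorized batch (`expensive_term` is a hypothetical stand-in for the costly physics, and the sampler call itself is elided):
```
import sys
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def expensive_term(p):
    # hypothetical stand-in for the costly per-point computation
    return -0.5 * float(np.sum(p ** 2))

def eval_batch(theta):
    # every rank evaluates its slice of the (n_points, n_dim) batch
    chunks = np.array_split(theta, comm.Get_size())
    local = np.array([expensive_term(p) for p in chunks[rank]])
    gathered = comm.gather(local, root=0)
    return np.concatenate(gathered) if rank == 0 else None

if rank != 0:
    # worker loop: wait for batches until rank 0 sends the stop signal
    while True:
        theta = comm.bcast(None, root=0)
        if theta is None:
            sys.exit(0)
        eval_batch(theta)
else:
    def loglike(theta):
        comm.bcast(theta, root=0)  # wake the workers with this batch
        return eval_batch(theta)

    # ... hand `loglike` to the sampler with vectorization enabled ...
    comm.bcast(None, root=0)       # tell the workers to shut down
```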
-
**Description of Problem**
AutoDock-GPU is a good use case for OpenCL (or OpenMP) parallelization across multiple GPUs (or xPUs) on a node of a supercomputer. We want to know how well an MPI parall…
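For context, the mapping we would start from is the usual one-rank-per-GPU scheme; a minimal sketch assuming mpi4py and CUDA devices (the `GPUS_PER_NODE` variable and the trailing docking step are placeholders):
```
import os
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Placeholder: how many GPUs each node exposes (site-specific).
gpus_per_node = int(os.environ.get("GPUS_PER_NODE", "4"))
local_gpu = rank % gpus_per_node

# Pin the rank to one device before any GPU runtime initialises,
# so each MPI process drives exactly one GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = str(local_gpu)

print(f"rank {rank} of {comm.Get_size()} -> GPU {local_gpu}")
# ... each rank then processes its share of the ligand work list ...
```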
-
Hello,
I am supporting a user at our University HPC center who is getting the error "Please use only 1 MPI process! We are currently working on the MPI parallelization of model selection." using ve…