Lucas-Lucas1 opened this issue 6 days ago
Hi @Lucas-Lucas1,
Thanks for reaching out. Have you already read https://warpx.readthedocs.io/en/latest/usage/workflows/domain_decomposition.html ?
To guide you a bit more, can you post the inputs and submission scripts you are using?
Hi @Lucas-Lucas1. It would also be helpful to know how many GPUs (and what kind) you are trying to run this simulation on, and how many particles you have in total. Note that WarpX permanently keeps the particle quantities in GPU memory, since moving them between the GPU and CPU is time-consuming. For this reason, you need enough total GPU memory to fit all the particles in your simulation. In my experience, a 40 GB A100 GPU can hold about 200 million particles, so to run a large simulation with, say, 800 million particles, I need to use at least 4 A100 GPUs.
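As a back-of-the-envelope check, here is a minimal sketch of that sizing rule. It assumes only the empirical figure quoted above (~200 million particles per 40 GB A100, i.e. roughly 200 bytes of GPU memory per particle); your actual capacity will vary with the number of particle components in your setup:

```python
import math

# Rule of thumb from above: one 40 GB A100 holds about 200 million
# particles (~200 bytes of GPU memory per particle).
PARTICLES_PER_A100_40GB = 200e6

def min_gpus(n_particles, particles_per_gpu=PARTICLES_PER_A100_40GB):
    """Minimum number of GPUs whose combined memory fits all particles."""
    return math.ceil(n_particles / particles_per_gpu)

print(min_gpus(800e6))  # -> 4, matching the example above
```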
Thanks for your responses. In fact, I haven't yet reached the section on domain decomposition; I will study it as soon as possible.
Below are my input script test.py and submission script sbatch.sh:

test.py.txt
sbatch.sh.txt
My cluster consists of 9 NVIDIA DGX A100 servers. Each server is equipped with dual AMD Rome 7742 processors (64 cores / 128 threads each), 1 TB of DDR4 memory, and 8 NVIDIA A100 40 GB SXM4 GPUs.
When performing 3D simulations, I want to use a large grid of 2048 × 64 × 2048 cells, but I encounter the error attached below:

Backtrace.0.txt
In this case, what can I do to resolve the issue? As a new WarpX user, there are many details in the official documentation that I’m still learning.
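For reference, here is a minimal sketch of how I understand the decomposition knobs from the linked docs page map onto PICMI Python (this is not my full test.py; the domain bounds, boundary conditions, and box sizes are placeholders I am experimenting with):

```python
# Minimal decomposition sketch, assuming PICMI Python input.
# Bounds, boundary conditions, and box sizes are placeholders.
from pywarpx import picmi

grid = picmi.Cartesian3DGrid(
    number_of_cells=[2048, 64, 2048],
    lower_bound=[-100e-6, -5e-6, -100e-6],  # placeholder extents
    upper_bound=[100e-6, 5e-6, 100e-6],
    lower_boundary_conditions=['periodic', 'periodic', 'periodic'],
    upper_boundary_conditions=['periodic', 'periodic', 'periodic'],
    # Split the domain into boxes of at most 128 cells per side; AMReX
    # distributes these boxes over the MPI ranks (one rank per GPU).
    warpx_max_grid_size=128,
    warpx_blocking_factor=32,
)

solver = picmi.ElectromagneticSolver(grid=grid, cfl=0.9)
sim = picmi.Simulation(solver=solver, max_steps=100)
```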
Additionally, I want to apply a time-varying external electromagnetic field in a specific region. I've reviewed issue #5046, but I noticed that using if(..., ...) statements there caused issues. Does the latest version of WarpX support setting this up with if statements, or is there a better method now?
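For concreteness, this is the kind of thing I am trying to do, continuing the sketch above (the amplitude, frequency, and region bounds are made-up placeholders):

```python
from pywarpx import picmi
# `sim` is the picmi.Simulation object from the previous sketch.

E0 = 1.0e6      # V/m, placeholder amplitude
omega = 2.0e12  # rad/s, placeholder angular frequency

# Time-varying E-field applied to particles only inside a region, using
# the expression parser's if(condition, then, else). Comparisons evaluate
# to 1 (true) or 0 (false), so their product acts as a logical AND.
applied_field = picmi.AnalyticAppliedField(
    Ez_expression=(
        f"if((abs(x)<10e-6)*(abs(z)<10e-6)*(t<1e-12), "
        f"{E0}*sin({omega}*t), 0)"
    )
)
sim.add_applied_field(applied_field)
```

Is this if(...) form expected to work in the current release, or should the region gating be expressed differently?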