ECP-copa / ExaMPM

Material point method proxy application based on Cabana.
BSD 3-Clause "New" or "Revised" License

MPI boundary issue #38

Open · streeve opened this issue 1 year ago

streeve commented 1 year ago

From Mark on Slack:


I've been using ExaMPM (DamBreak) for performance and profiling on one Crusher node and continue to see what look like communication errors leading to spurious new velocities at processor boundaries, which then lead to numerical blow-up and crashes (almost always in g2p->scatter->packBuffer).

This only occurs for problems over about 50^3 cells and 4 or more MPI ranks (I test with srun -N1 -n8 -S16 --exclusive -t30:00 --cpus-per-task=1 --threads-per-core=1 --gpus-per-task=1 --gpu-bind=closest ./DamBreak 0.01 2 3 0.00004 10.0 2500 hip) and does not go away with wider halos, an evenly divisible ny, different Y boundary conditions (periodic, slip, noslip), or different versions of Cabana (0.5.0, head).

It seems to be suppressed somewhat with fewer particles per cell and by setting the AMD_SANITIZE_KERNEL and AMD_SANITIZE_COPY variables, but it never goes away. For a while it seemed to always happen 2 or 3 time steps after a Silo write, but it still occurs without any Silo writes. It can occur anywhere between steps 5000 and 100000.
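For context, here is a minimal sketch of the code path named in the crash, assuming the Cajita grid halo API shipped with Cabana 0.5.x (the grid layer was renamed Cabana::Grid on newer development branches). The g2p phase ends with a halo scatter that sums ghost contributions back into owned values on neighboring ranks, and packBuffer fills the MPI send buffers for that exchange. The mesh extents, field name, and layout below are illustrative stand-ins, not ExaMPM's actual setup:

```cpp
// Hedged sketch of the g2p -> scatter -> packBuffer path, assuming the
// Cajita API in Cabana 0.5.x. Sizes and labels are illustrative only.
#include <Cajita.hpp>
#include <Kokkos_Core.hpp>
#include <mpi.h>

#include <array>

int main( int argc, char* argv[] )
{
    MPI_Init( &argc, &argv );
    {
        Kokkos::ScopeGuard guard( argc, argv );
        using exec_space = Kokkos::DefaultExecutionSpace;
        using device_type = exec_space::device_type;

        // Uniform global mesh, block-partitioned across MPI ranks.
        auto global_mesh = Cajita::createUniformGlobalMesh(
            std::array<double, 3>{ 0.0, 0.0, 0.0 },
            std::array<double, 3>{ 1.0, 1.0, 1.0 },
            std::array<int, 3>{ 64, 64, 64 } );
        Cajita::DimBlockPartitioner<3> partitioner;
        auto global_grid = Cajita::createGlobalGrid(
            MPI_COMM_WORLD, global_mesh,
            std::array<bool, 3>{ false, false, false }, partitioner );

        // Local grid with ghost (halo) layers, as in the wider-halo tests.
        const int halo_width = 3;
        auto local_grid = Cajita::createLocalGrid( global_grid, halo_width );

        // Node-centered velocity field (3 components per node).
        auto layout =
            Cajita::createArrayLayout( local_grid, 3, Cajita::Node() );
        auto velocity =
            Cajita::createArray<double, device_type>( "velocity", layout );

        // The scatter at the end of g2p: ghost contributions are packed into
        // MPI buffers (packBuffer), exchanged, and summed into owned values.
        auto halo = Cajita::createHalo( Cajita::NodeHaloPattern<3>(),
                                        halo_width, *velocity );
        halo->scatter( exec_space(), Cajita::ScatterReduce::Sum(), *velocity );
    }
    MPI_Finalize();
    return 0;
}
```

The symptom described above (spurious velocities only at processor boundaries, only with 4+ ranks) is consistent with corruption somewhere in this pack/exchange/sum sequence rather than in the on-rank interpolation itself.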

kwitaechong commented 1 year ago

I have tried to reproduce the error several times, but it always behaves correctly. I used this batch script:

```bash
#!/bin/bash
#SBATCH -A CSC304_crusher
#SBATCH -J dam_break
#SBATCH -o dam_break.out
#SBATCH -t 00:60:00
#SBATCH -p batch
#SBATCH -N 1

srun -n8 -S16 --exclusive --cpus-per-task=1 --threads-per-core=1 --gpus-per-task=1 --gpu-bind=closest /autofs/nccs-svm1_home1/kc4/Software/build_crusher/ExaMPM/examples/DamBreak 0.01 2 3 0.00004 20.0 2500 hip
```
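One way to make runs like this more conclusive, whether or not they crash, would be to scan the grid velocity array for non-finite or implausibly large values right after the scatter each step, so a bad exchange is caught at the step where it first appears instead of thousands of steps later in packBuffer. A minimal sketch, assuming a rank-4 Kokkos view (i, j, k, component) as Cajita grid arrays use; the function name and the vmax threshold are made up for illustration:

```cpp
// Hypothetical debugging aid, not part of ExaMPM: count non-finite or
// implausibly large grid velocity entries. Call after the g2p scatter each
// step and abort on the first nonzero count. The rank-4 (i, j, k, component)
// layout matches Cajita grid arrays; the vmax threshold is arbitrary.
#include <Kokkos_Core.hpp>

template <class ExecSpace, class ViewType>
int countBadVelocities( const ViewType& u, const double vmax = 1.0e6 )
{
    int bad = 0;
    Kokkos::parallel_reduce(
        "check_velocity",
        Kokkos::MDRangePolicy<ExecSpace, Kokkos::Rank<4>>(
            { 0, 0, 0, 0 },
            { (long)u.extent( 0 ), (long)u.extent( 1 ),
              (long)u.extent( 2 ), (long)u.extent( 3 ) } ),
        KOKKOS_LAMBDA( const int i, const int j, const int k, const int d,
                       int& count ) {
            const double v = u( i, j, k, d );
            if ( !Kokkos::isfinite( v ) || Kokkos::fabs( v ) > vmax )
                ++count;
        },
        bad );
    return bad;
}
```

If a run is genuinely clean, the count should stay at zero for the full run; on the failing configuration it should fire at the step where the boundary velocities first go bad, well before the eventual crash.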