BrianMarre closed this 1 month ago
@ikbuibui please add the cfg you are using and maybe a few notes regarding the CI workflow. Thanks!
To use the atomic physics test locally, simply copy the test code outside the picongpu source and execute `./validate/validateLocally.sh`.
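For reference, a minimal sketch of that local run; the source location of the test and the destination directory are assumptions, only the `validateLocally.sh` call itself comes from the comment above:

```bash
# copy the test setup out of the picongpu source tree
# (the source path below is an assumption, adjust to where the test actually lives)
cp -r picongpu/share/picongpu/tests/atomicPhysics ~/atomicPhysicsTest
cd ~/atomicPhysicsTest

# run the local validation driver shipped with the test
./validate/validateLocally.sh
```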
I have added the CFGs I am using for the JUBE CI scaling tests, in folders inside `etc/` corresponding to the tests. These CFGs are designed to run on NVIDIA A100 (40 GB) GPUs. `single/1.cfg` is a single-GPU test; for the other CFGs the number (e.g. `strong_tiny/4.cfg`) refers to the number of nodes.
One can execute these CFGs with TBG, but validation won't be performed. The user will have to use the custom `submitAction.sh` provided with the cfg, or copy the validation folder to the location of the results manually. Finally, the user must execute `validate.sh`. The user should be careful to set the correct CFG and `submitAction` paths; a sketch of this manual workflow is shown below.
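A minimal sketch of that manual workflow, assuming a Slurm machine and the usual `tbg -s/-c/-t` call; the template path, run directory, and the exact location of `validate.sh` inside the validation folder are assumptions:

```bash
# submit one of the scaling CFGs with tbg (cfg, template and run directory are illustrative)
tbg -s sbatch \
    -c etc/strong_tiny/4.cfg \
    -t etc/picongpu/hemera-hzdr/gpu_batch.tpl \
    $SCRATCH/runs/atomicPhysics_strong_4

# if the custom submitAction.sh was not used, copy the validation folder
# next to the generated results by hand
cp -r validate $SCRATCH/runs/atomicPhysics_strong_4/

# once the job has finished, run the validation against the produced output
cd $SCRATCH/runs/atomicPhysics_strong_4
./validate/validate.sh
```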
In the JUBE CI, the workflow is as follows:

1. JUBE calls TBG for the scaling test we want to perform (weak/strong scaling, different node counts). Each scaling test has a set of CFG files associated with it, and JUBE runs the CFGs one by one.
2. Through a slightly modified `submitAction.sh`, the TBG call copies the validation folder over to where the results are generated.
3. In the TPL file, after the `srun` that starts the simulation, we call the validation script and we are done (see the sketch below).
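To make step 3 concrete, here is a rough sketch of the tail of such a batch `.tpl` file; the `!TBG_` variable names follow PIConGPU's template convention, but the exact variables, program parameters, and the validation script path are assumptions:

```bash
# ... inside the .tpl batch script, after the usual setup ...

# start the simulation (standard PIConGPU template line, parameters illustrative)
srun !TBG_dstPath/input/bin/picongpu !TBG_programParams | tee output

# once srun returns, run the validation script that submitAction.sh copied
# next to the results; its exit code decides whether the JUBE step passes
!TBG_dstPath/validate/validate.sh
```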
Updated the PR to not hold all the scaling CFGs. Now there are two CFGs in `etc/`, written for use on nodes with 4 NVIDIA A100 (40 GB) GPUs:

- `1.cfg` holds a single-GPU config.
- `4.cfg` holds a 4-node, 4-GPUs-per-node config.
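For orientation, an illustrative sketch of how the two CFGs might differ in their device layout; the variable names follow PIConGPU's cfg convention, but the concrete decomposition of the 16 devices is an assumption:

```bash
# 1.cfg -- a single A100, everything on one device
TBG_devices_x=1
TBG_devices_y=1
TBG_devices_z=1

# 4.cfg -- 4 nodes x 4 GPUs = 16 devices, e.g. decomposed as 2 x 4 x 2
TBG_devices_x=2
TBG_devices_y=4
TBG_devices_z=2
```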
What's the difference to `tests/compileAtomicPhysics`? If we have an example, then the atomic physics code is already compile-tested and there is no explicit compile test required.
Adds the atomic physics test setup and validation scripts used for runtime tests.