ComputationalRadiationPhysics / picongpu

Performance-Portable Particle-in-Cell Simulations for the Exascale Era :sparkles:
https://picongpu.readthedocs.io

modular output #4061

Closed · cbontoiu closed this issue 2 years ago

cbontoiu commented 2 years ago

Hello,

I tried to follow the modular approach when using the openPMD plugin, but the output data came in an unexpected form. Although I asked for fields, derived fields, and particles, I can fetch only the particles. Please see the snapshots below.

[screenshot: openPMD-viewer output showing only particle data]

This is my fileOutput.param file

[screenshot: fileOutput.param]

and this is my .cfg file

[screenshot: .cfg file]
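
For orientation, the split in my .cfg follows the pattern below. This is only a rough sketch, not my exact file (which is in the screenshot): the periods and the --openPMD.source names are placeholders, and picongpu --help should list the sources your setup actually accepts.

```bash
# Three instances of the openPMD plugin, one per output stream
# (periods and source names are placeholders):
TBG_openPMD_fields="--openPMD.period 100 --openPMD.file fields --openPMD.source 'E, B, J'"
TBG_openPMD_dens="--openPMD.period 100 --openPMD.file densities --openPMD.source 'e_chargeDensity, e_energyDensity'"
TBG_openPMD_part="--openPMD.period 100 --openPMD.file particles --openPMD.source 'species_all'"

TBG_plugins="!TBG_openPMD_fields !TBG_openPMD_dens !TBG_openPMD_part"
```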

steindev commented 2 years ago

Hey @cbontoiu, I can't see anything obviously wrong at the moment. For further debugging, could you please perform the following steps in your simulation run directory: (1) run ./input/bin/picongpu --help and post the output here; (2) post the full command with which picongpu is executed. You can find it at the end of tbg/submit.start.

sbastrakov commented 2 years ago

@cbontoiu how do you load your data before calling ts.slider()? You now output to different openPMD files, as opposed to our default simData for everything. So perhaps the viewer doesn't "see" all the data, and maybe, e.g., different viewers should be loaded for different parts of your data.
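
Something along these lines, for example (an untested sketch, assuming openPMD-viewer 1.x; the paths are hypothetical and assume each file series sits in, or is symlinked into, its own directory):

```python
from openpmd_viewer import OpenPMDTimeSeries

# One time series per output stream; a single OpenPMDTimeSeries
# expects one file series, so several prefixes mixed in one
# directory may not all be picked up.
ts_fields = OpenPMDTimeSeries('./simOutput/openPMD/fields/')
ts_particles = OpenPMDTimeSeries('./simOutput/openPMD/particles/')

ts_fields.slider()     # browse E/B/J here
ts_particles.slider()  # browse particle data here
```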

steindev commented 2 years ago

In case the comment of @sbastrakov does not already solve the issue, please also check your ./simOutput/openPMD directory to see whether the three files per time step exist as you expect. If so, please run bpls -l <DataFileName>.bp on each of the three for a specific time step (e.g. the zeroth) and post the output here, too.

Btw, bpls is part of the ADIOS2 library and located in $ADIOS2_ROOT/bin.
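
For example (file names are placeholders; use whatever names your .cfg assigns to the three outputs):

```bash
cd ./simOutput/openPMD
$ADIOS2_ROOT/bin/bpls -l fields_000000.bp
$ADIOS2_ROOT/bin/bpls -l densities_000000.bp
$ADIOS2_ROOT/bin/bpls -l particles_000000.bp
```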

cbontoiu commented 2 years ago

> Hey @cbontoiu, I can't see anything obviously wrong at the moment. For further debugging, could you please perform the following steps in your simulation run directory: (1) run ./input/bin/picongpu --help and post the output here; (2) post the full command with which picongpu is executed. You can find it at the end of tbg/submit.start.

For me, this command does not work:

[screenshot: error output from ./input/bin/picongpu --help]

The commands that I use to compile and run from the terminal are:

```bash
conda deactivate && source $HOME/src/spack/share/spack/setup-env.sh && spack load picongpu && spack load openpmd-api && export PIC_BACKEND="cuda:75" && export OMPI_MCA_io=^ompio
```

```bash
rm -r .build/ && pic-build &> build_log.txt && tbg -s bash -c etc/picongpu/runConfiguration.cfg -t etc/picongpu/bash/mpiexec.tpl /media/quasar/RawDataDisk1/PICONGPU/picongpu-0.6.0/OUT_LAYERS-GRAPHENE-2D/LAYERS-GRAPHENE-2D_ION_3_XY[nm]_1600_2050_WL[nm]_100.0_I[Wcm-2]_1.0e+21_A0_2.70_Dt[fs]_1.00_w0[nm]_400_f0[nm]_250_pol_LX_msh_11840_15168_ppc_10_un29b_particles_e
```

sbastrakov commented 2 years ago

When trying to run --help, you need an environment compatible with the one in which you built PIConGPU, e.g. the same profile pre-loaded via spack load picongpu, or however you normally set it up.
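
For example, reusing the environment commands you posted above:

```bash
source $HOME/src/spack/share/spack/setup-env.sh
spack load picongpu && spack load openpmd-api
export PIC_BACKEND="cuda:75" && export OMPI_MCA_io=^ompio

# then, from the simulation run directory:
./input/bin/picongpu --help
```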

cbontoiu commented 2 years ago

As discussed previously in https://github.com/ComputationalRadiationPhysics/picongpu/issues/4051, I normally write all data into one folder called simData, and the openPMD-viewer works both through the slider and through the API. However, bundling all data into a single openPMD plugin limits the resolution of my results due to memory limitations. Because I dump E, B, J, chargeDensity, energyDensity, and the particles' position, momentum, and ID, it made sense to split the output across three openPMD plugin instances (fields, densities, and particles). This indeed allowed me to use better resolutions without reaching the memory limits of my machine. But I then discovered that only the particle data was correctly readable: the fields could not be reached, although the data was there, as seen from the ~GB size of each field-related snapshot file.

So here is the problem for you to investigate: does splitting the output of the openPMD plugin, the way I did, produce fields which can be correctly retrieved through the openPMD-viewer?

Due to this issue, I have now reverted to running the same model several times, each time asking for a different output. This allows me to tune the mesh and PPC if a certain quantity needs more memory, without compromising the other results:

[screenshot: per-run output settings]

Of course, this adds some complexity, as data needs to be loaded from different locations and the total waiting time is longer, but it seems the safest way to push PIConGPU's performance to the limits on my machine.
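
Schematically, I use one .cfg per run, each enabling a single openPMD instance, and launch them one after another (a sketch with placeholder names, following the tbg call I posted above):

```bash
# etc/picongpu/fields.cfg    -> TBG_plugins="!TBG_openPMD_fields"
# etc/picongpu/densities.cfg -> TBG_plugins="!TBG_openPMD_dens"
# etc/picongpu/particles.cfg -> TBG_plugins="!TBG_openPMD_part"
tbg -s bash -c etc/picongpu/fields.cfg -t etc/picongpu/bash/mpiexec.tpl /path/to/OUT_fields
```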

cbontoiu commented 2 years ago

> When trying to run --help, you need an environment compatible with the one in which you built PIConGPU, e.g. the same profile pre-loaded via spack load picongpu, or however you normally set it up.

[screenshot: terminal output]

steindev commented 2 years ago

@cbontoiu You're missing the rest of your commands to set up the environment: spack load openpmd-api && export PIC_BACKEND="cuda:75" && export OMPI_MCA_io=^ompio

But since, as you said, the files are created and have a reasonable size, running ./picongpu --help is not necessary anymore. (Please still try bpls on the created files as described above.) In the end, I guess you are not providing the correct file pattern to openPMD-viewer. As I don't use it myself, I can't help you further there.

Actually, I think you should open an issue there if the behavior you observe with openPMD-viewer is not as expected.