rezaplasma opened this issue 1 year ago
Hi @rezaplasma, thank you for reporting this.
Could you attach here the input file that was used, as well as the CMake compilation flags used to build the executable, so that we can try to reproduce the issue?
In general, not all boundary conditions work with all algorithmic options in WarpX, but PSATD is definitely supported also in RZ geometry, so we might want to look in more detail at your input file.
Hi @EZoni,
Thanks for your reply. I'm running with Python (PICMI scripts); the script is attached.
When I set the method to PSATD, I get that error.
@EZoni
I have installed WarpX with the conda package. Do I need to build with the WarpX_PSATD option?
@rezaplasma Did you try to use 'open' instead of 'dirichlet' in the PICMI script? This should work, in principle, with the PSATD solver.
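For reference, a minimal sketch of where those strings go in a PICMI script (the cell counts and domain bounds below are placeholders, not values from the attached script):

```python
from pywarpx import picmi

# Placeholder RZ grid, only to show where the boundary-condition lists go;
# the cell counts and bounds are illustrative, not taken from the user's script.
grid = picmi.CylindricalGrid(
    number_of_cells=[64, 256],
    n_azimuthal_modes=2,
    lower_bound=[0., -50.e-6],
    upper_bound=[100.e-6, 0.],
    lower_boundary_conditions=['none', 'open'],   # 'none' on the axis at r = 0
    upper_boundary_conditions=['open', 'open'],   # 'open' at r_max and z_max
)
```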
@RemiLehe
Hi, I used open for the boundaries:
lower_boundary_conditions = ['none', 'open'],
upper_boundary_conditions = ['open', 'open'],
and received the error below:
0::Assertion `field_boundary_lo[1] != FieldBoundaryType::PML && field_boundary_hi[1] != FieldBoundaryType::PML' failed, file "/home/conda/feedstock_root/build_artifacts/warpx_1683172402631/work/Source/WarpX.cpp", line 934, Msg:
### ERROR : PML are not implemented in RZ geometry along z; please set a
# different boundary condition using boundary.field_lo and
# boundary.field_hi.
I think the open condition is not available with the PSATD solver.
Yes, that's correct. In summary, with the PSATD solver in RZ geometry, open maps to PML, which is implemented only radially (not along z), while dirichlet maps to a PEC boundary, which is not implemented for PSATD yet. Does that solve your issue?

No, it is not resolved.
Using open boundaries and method='PSATD' for the solver did not solve this issue, and I got the same error as above:
0::Assertion `field_boundary_lo[1] != FieldBoundaryType::PML && field_boundary_hi[1] != FieldBoundaryType::PML' failed, file "/home/conda/feedstock_root/build_artifacts/warpx_1683172402631/work/Source/WarpX.cpp", line 934, Msg:
### ERROR : PML are not implemented in RZ geometry along z; please set a
# different boundary condition using boundary.field_lo and
# boundary.field_hi.
@rezaplasma I think you can try
lower_boundary_conditions = ['none', 'damped']
upper_boundary_conditions = ['open', 'damped']
which sets "damped" BCs along z and PML at $r_{max}$.
@dpgrote Can you confirm that PMLs are not implemented along z for RZ geometry with the PSATD solver?
That is correct - with the RZ PSATD solver, the PML is only implemented in the radial direction.
@EZoni
There is no condition called damped mentioned in the documentation:
https://warpx.readthedocs.io/en/latest/usage/python.html#cylindricalgrid
Thanks, we might have forgotten to add it to the WarpX/PICMI documentation. But I think the code should work if you set "damped" through PICMI. Let us know if that's the case. We will fix the documentation asap.
Thank you, I will try it and let you know.
Using damped should work with PICMI. I don't think there was an intentional decision not to add damped to the standard, at least not beyond the thought that it is WarpX-specific. I don't think there is any issue with adding it, except that perhaps there needs to be a mechanism to specify the shape and extent of the damping function.
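Concretely, since the PICMI layer passes these strings straight through to WarpX, the suggestion above could look something like this (a sketch with placeholder grid values; if I remember the PICMI API correctly, write_input_file is a convenient way to check which boundary.field_lo/hi values the script generates):

```python
from pywarpx import picmi

# Placeholder RZ grid; 'damped' is WarpX-specific but is passed as a plain
# string through the standard PICMI boundary-condition lists.
grid = picmi.CylindricalGrid(
    number_of_cells=[64, 256],
    n_azimuthal_modes=2,
    lower_bound=[0., -50.e-6],
    upper_bound=[100.e-6, 0.],
    lower_boundary_conditions=['none', 'damped'],  # axis at r = 0, 'damped' along z
    upper_boundary_conditions=['open', 'damped'],  # PML at r_max, 'damped' along z
)

solver = picmi.ElectromagneticSolver(grid=grid, method='PSATD')
sim = picmi.Simulation(solver=solver, max_steps=10)

# Dump the native WarpX input deck to inspect the generated
# boundary.field_lo / boundary.field_hi settings.
sim.write_input_file(file_name='inputs_rz_damped')
```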
Yes, you are right, thanks for your comprehensive explanation. I don't currently have access to my local system or the cluster, so I cannot try it yet.
Hi @dpgrote @EZoni @RemiLehe
I tried:
lower_boundary_conditions = ['none', 'damped']
upper_boundary_conditions = ['open', 'damped']
with the inputs below for domain decomposition:
# Number of cells
nr = 320
nz = 1280
# Domain decomposition
max_grid_size = 64
blocking_factor = 32
I got the error below:
domain size in direction 0 is 320
blocking_factor is 512
amrex::Error::0::domain size not divisible by blocking_factor !!!
Then, after changing the inputs as follows:
nr = 512
nz = 1280
max_grid_size = 64
blocking_factor = 32
I repeatedly got the error below:
--- INFO : Writing openPMD file warpx_rz/diags000000
STEP 1 starts ...
**** WARNINGS ******************************************************************
* GLOBAL warning list after [ FIRST STEP ]
*
* --> [! ] [PML] [raised once]
* Using PSATD together with PML may lead to instabilities if the plasma
* touches the PML region. It is recommended to leave enough empty space
* between the plasma boundary and the PML region.
* @ Raised by: ALL
*
********************************************************************************
STEP 1 ends. TIME = 1.563581696e-16 DT = 1.563581696e-16
Evolve time = 4.173124357 s; This step = 4.173124357 s; Avg. per step = 4.173124357 s
STEP 2 starts ...
Segfault
/usr/bin/addr2line: '/home/reza-kh/warpx': No such file
/usr/bin/addr2line: '/home/reza-kh/warpx': No such file
/usr/bin/addr2line: '/home/reza-kh/warpx': No such file
.
.
.
In order to understand the segfault we would need to have more information. You could follow the guidelines available in the documentation here, most importantly compile with -DCMAKE_BUILD_TYPE=Debug, in order to get more meaningful error messages and backtraces. Please post here the full output of the simulation (including info on domain decomposition at the beginning of the output) as well as one relevant backtrace file once you have run the simulation in DEBUG mode.
Regarding the first error you got, related to blocking_factor being 512 and not compatible with the domain size of 320 in the radial direction, I am a little surprised by that. There should be no domain decomposition in the radial direction in RZ geometry, and the code should automatically account for that. This is what is also stated in our documentation here:
When using the RZ spectral solver, the values of amr.max_grid_size and amr.blocking_factor are constrained since the solver requires that the full radial extent be within each block. For the radial values, any input is ignored and the max grid size and blocking factor are both set equal to the number of radial cells.
@dpgrote Do you understand why the code is not automatically setting blocking_factor to the number of cells radially?
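For reference, here is a small sanity-check sketch of the failing divisibility check and the quoted radial constraint (nr, nz and blocking_factor are the values from the input above; this is just my reading of the error message and the documentation quote):

```python
# Domain-decomposition inputs from the report above
nr, nz = 320, 1280
blocking_factor = 32

# AMReX requires the domain size in each direction to be divisible by the
# blocking factor; the failed assertion shows a radial blocking factor of 512.
print(f"z:      nz = {nz}, nz % blocking_factor = {nz % blocking_factor} (must be 0)")

# For the RZ spectral solver, the full radial extent must fit in one block,
# so the radial max_grid_size and blocking_factor are expected to be set
# equal to nr automatically, which would make the radial check trivially pass.
expected_radial_blocking_factor = nr
print(f"radial: nr = {nr}, expected radial blocking factor = {expected_radial_blocking_factor}")
```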
Hi @EZoni, thanks for your attention. I will share the backtrace file and the simulation output in the next few days.
As for the first error: I got it for any blocking_factor value I set, which surprised me.
The issue you saw, amrex::Error::0::domain size not divisible by blocking_factor !!!, is fixed in PR #4073. Through a complicated chain of effects, there was a bug that did not allow nr to be anything other than a power of 2.
Hi @dpgrote,
this is what I have understood I need to keep in mind when using RZ geometry with the PSATD solver: blocking_factor must be a power of 2, max_grid_size does not need to be an integer multiple of blocking_factor, and what really matters is that the number of cells (nr) must be a power of 2 (in addition to being divisible by blocking_factor).
I was able to run with:
# Number of cells
nr = 512
nz = 1600
# Domain decomposition
max_grid_size = 50
blocking_factor = 32
However, the second error mentioned above still occurs.
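For reference, a sketch of how this working configuration might be written in a PICMI script, assuming the WarpX-specific keyword arguments warpx_max_grid_size and warpx_blocking_factor are used to pass the decomposition inputs (the domain bounds are placeholders, not the actual values):

```python
from pywarpx import picmi

nr, nz = 512, 1600

# Placeholder bounds; the decomposition values are the ones reported to run.
grid = picmi.CylindricalGrid(
    number_of_cells=[nr, nz],
    n_azimuthal_modes=2,
    lower_bound=[0., -60.e-6],
    upper_bound=[120.e-6, 0.],
    lower_boundary_conditions=['none', 'damped'],
    upper_boundary_conditions=['open', 'damped'],
    warpx_max_grid_size=50,     # assumed WarpX-specific PICMI argument
    warpx_blocking_factor=32,   # assumed WarpX-specific PICMI argument
)

solver = picmi.ElectromagneticSolver(grid=grid, method='PSATD')
```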
Hi @EZoni
AMReX (23.05) initialized
PICSAR (1903ecfff51a)
WarpX (23.05-nogit)
__ __ __ __
\ \ / /_ _ _ __ _ __\ \/ /
\ \ /\ / / _` | '__| '_ \\ /
\ V V / (_| | | | |_) / \
\_/\_/ \__,_|_| | .__/_/\_\
|_|
Level 0: dt = 1.250865357e-16 ; dx = 2.34375e-07 ; dz = 3.75e-08
Grids Summary:
Level 0 50 grids 204800 cells 100 % of domain
smallest grid: 128 x 32 biggest grid: 128 x 32
-------------------------------------------------------------------------------
--------------------------- MAIN EM PIC PARAMETERS ----------------------------
-------------------------------------------------------------------------------
Precision: | DOUBLE
Particle precision: | DOUBLE
Geometry: | 2D (RZ)
| - n_rz_azimuthal_modes = 2
Operation mode: | Electromagnetic
| - vacuum
-------------------------------------------------------------------------------
Current Deposition: | direct
Particle Pusher: | Boris
Charge Deposition: | standard
Field Gathering: | energy-conserving
Particle Shape Factor:| 3
-------------------------------------------------------------------------------
Maxwell Solver: | PSATD
| - update with rho is ON
| - current correction is ON
| - collocated grid
Guard cells | - ng_alloc_EB = (16,16)
(allocated for E/B) |
-------------------------------------------------------------------------------
Moving window: | ON
| - moving_window_dir = z
| - moving_window_v = 299792458
-------------------------------------------------------------------------------
For full input parameters, see the file: warpx_used_inputs
--- INFO : Writing openPMD file warpx_rz/diags000000
STEP 1 starts ...
Segfault
/usr/bin/addr2line: '/home/reza-kh/warpx': No such file
/usr/bin/addr2line: '/home/reza-kh/warpx': No such file
Here are the simulation output and the backtrace file that you requested.
Thank you, @rezaplasma.
I get a segfault if I try to run the test on GPU on my local computer (with -DWarpX_COMPUTE=CUDA). No segfault if I run on CPU (with -DWarpX_COMPUTE=OMP). I have not tried GPU on an HPC cluster for this specific test.
In particular, I get (running the WarpX executable directly on the input deck produced by PICMI):
terminate called after throwing an instance of 'blas::Error'
what(): device BLAS not available, in function set_device
@rezaplasma Are you compiling and running on CPU or GPU?
@ax3l @dpgrote @RemiLehe The error above seems to be related to BLAS, hence the question: do we have to do something special if we want to run RZ PSATD on GPU on a local machine, as far as BLAS++ and LAPACK++ are concerned? So far I have run RZ PSATD on GPU only on HPC clusters and it was always fine. I wonder if I missed something from the documentation about setting up those libraries, locally, so that they work correctly with both CPU and GPU. Update on this question: I think I have to recompile BLAS++ and LAPACK++ by hand for GPU; somehow I was convinced that we were doing it automatically on the fly.
Hi @EZoni, thank you.
I'm running on a local computer on CPU to test. I am using a PICMI script that is run in the conda environment (i.e. after conda activate warpx).
Now, do you mean that I can run it as follows?
-DWarpX_COMPUTE=OMP python script.py
I see; if you get a segfault on CPU, then I have to investigate further, because I cannot reproduce the issue at the moment. When I run on CPU locally, using the PICMI script (but without a conda environment), the code doesn't seem to crash.
Maybe @ax3l will be able to comment on whether some compilation flags need to be tweaked for the conda environment to be set up correctly in this case.
I have also run without conda, but I got the same error. I am really confused because I cannot tell what is causing this issue, and time is passing. Anyway, thanks @EZoni.
Hi,
I'm running with rz geometry. When I use the PSATD option for method in ElectromagneticSolver, I get the error below:
PEC boundary not implemented for PSATD, yet
Is it possible to resolve that error, or does it not work for this geometry? The other options also don't work; I found that only the Yee method can be used for rz geometry.
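For context, a minimal sketch of the kind of setup described here (placeholder values; with 'dirichlet' field boundaries, which map to PEC, switching the method to 'PSATD' is what triggers the quoted error):

```python
from pywarpx import picmi

# Placeholder RZ grid with 'dirichlet' (PEC) field boundaries away from the axis
grid = picmi.CylindricalGrid(
    number_of_cells=[64, 256],
    n_azimuthal_modes=2,
    lower_bound=[0., -50.e-6],
    upper_bound=[100.e-6, 0.],
    lower_boundary_conditions=['none', 'dirichlet'],
    upper_boundary_conditions=['dirichlet', 'dirichlet'],
)

# 'Yee' runs, but method='PSATD' raises "PEC boundary not implemented for PSATD, yet"
solver = picmi.ElectromagneticSolver(grid=grid, method='PSATD')
```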