SmileiPIC / Smilei

Particle-in-cell code for plasma simulation
https://smileipic.github.io/Smilei

Get Bl and Br from TrackParticles (AMcylindrical simulation) #612

Closed noreasonatall1 closed 1 year ago

noreasonatall1 commented 1 year ago

Hi! I was wondering whether it is possible to get longitudinal and radial fields (like Bl_mode_0, Br_mode_0, etc.) as a TrackParticles output during AMcylindrical simulations, instead of Bz, Bx, By, etc. From the documentation it looks like it is not possible, but maybe you've made some updates?

Thank you in advance!

beck-llr commented 1 year ago

Hi, I am not sure I understand what you want. Bl_mode_1, for instance, is a complex number, a function of x and r, whereas the fields given in the TrackParticles diagnostics are real numbers, functions of the particle positions x, y and z.

Anyway, I am sorry to say that TrackParticles always gives the fields that are used to push the particles, and those are always Bx, By and Bz (including the contributions of all modes).

noreasonatall1 commented 1 year ago

Well, I do not fully understand it myself (I've never done simulations in AM geometry), but I'll try to explain. I was looking at a recent arXiv preprint (arXiv:2303.12874): on page 4 they write down expression (16) for calculating the work done on a particle by the plasma and by the laser. It's the usual integral of the scalar product of E and v, but with a twist. Instead of Ex and Etransverse for the plasma and laser work (which in many cases is not correct), they suggest using E0 and E1, the fields of the m=0 and m=1 modes respectively. There E0 and E1 are definitely real quantities of the two decomposed modes.
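
Schematically, as I read it (my notation, not necessarily the paper's exact form), Eq. (16) splits the usual work integral by azimuthal mode, where E^(m) is the real field reconstructed from mode m alone:

W_m = q \int \mathbf{v}(t) \cdot \mathbf{E}^{(m)}\big(\mathbf{x}(t), t\big)\, dt ,
\qquad W_\mathrm{plasma} \simeq W_{m=0}, \qquad W_\mathrm{laser} \simeq W_{m=1}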

This led me to wonder whether it is possible to get these fields from TrackParticles and calculate the work the same way. From your answer it's clear that it's not. Do you think you could add this feature? AM modes really seem to vastly simplify the calculation of work in cases where strong longitudinal laser fields are present. As far as I'm aware, the only other way to correctly distinguish laser and plasma work is a Helmholtz decomposition, which requires an insane amount of data to be saved first.

mccoys commented 1 year ago

It sounds like this would require a dedicated diagnostic. Calculating the work means integrating at every timestep. That could be too much data to post-process, so it is better done directly during the simulation.

Technically, it could be done inside the current TrackParticles diagnostic, but time integration means rethinking its principle a bit.

Z10Frank commented 1 year ago

I agree that the evaluation of Eq. 16 of that preprint would probably require a dedicated diagnostic.

However, with the available diagnostics you can "easily" compute the result of Eq. 17 (the part without the discrete mode summation), which, according to the preprint, gives an estimate of Eq. 16.

It would require data at many timesteps, so to avoid saturating your memory you should try to select only one or a few particles.

noreasonatall1 commented 1 year ago

So there is no way to get the fields of the two modes without creating a separate diagnostic? Could it be a feature request then?

About Eq. 17: it has its problems and can be way off in my experience. Not to mention that it's not applicable if there is any reflection involved. Even the approximate ratio of works in Eq. 20 only holds in specific circumstances, which are roughly met in the problems I study.

That is why I was so interested in trying the AM modes for calculating work in the first place. And yes, calculating work from the fields is always tedious, especially if ionization is involved (think >10 TB of data), but manageable. However, if you can wrap it inside Smilei, I think it would be appreciated by many people.

mccoys commented 1 year ago

Technically, getting the fields of specific modes requires a bit of work, as the interpolators sum over all modes. New interpolators have to be coded. Additionally, the diagnostic cannot currently handle complex data. So yes, it is a feature request, and not a very trivial one.

I am wondering whether this would be more suited to a particle binning diagnostic. If we had the fields at the particle positions, the work could be calculated for each particle, then summed. The time_average argument would take care of the temporal integral. Does that make sense or am I wrong?

noreasonatall1 commented 1 year ago

Well, I can see ParticleBinning being useful to visualize the works. For example, doing something like this [attached image] would just be a question of visualizing the diagnostic instead of calculating the works for a small bunch of particles in 10 TB of data. However, if you calculated the works in ParticleBinning, wouldn't it be trivial to add them as an output of TrackParticles as well? From your answers I understood that it would be difficult to calculate the work from the AM modes separately. Ok, let's forget about that and focus on the fields that are easily obtained, like Ex, Ey. Could you possibly add an output of the work calculated from these fields (in Cartesian geometry)?

mccoys commented 1 year ago

Whether it is in ParticleBinning or TrackParticles completely changes the way it would be done. That is why I am asking what the best option would be. In both cases, though, it requires some thinking and significant changes. There are technical reasons for that.

noreasonatall1 commented 1 year ago

Oh, ok. In my opinion ParticleBinning would be the most useful.

Currently, work can be calculated from TrackParticles during post-processing. However, one needs to write data very frequently in time, because particles usually oscillate rapidly in plasma waves at the beginning -> the integral diverges otherwise. This greatly increases the data size and limits the possibility of processing large samples of particles. ParticleBinning would not have this issue.

mccoys commented 1 year ago

Ok. Note that ParticleBinning, by definition, does not provide individual particle data. What would the diagnostic look like? I guess the deposited_quantity would be weight x work. But which axes?

noreasonatall1 commented 1 year ago

Yeah, weight x work makes sense as a deposited_quantity, but there would have to be 3 of them (for Cartesian 3D), similar to weight_p. Axes should definitely include x, y, z (to see where the acceleration, i.e. the work, takes place) and Ekin/gamma (to filter out high-energy particles).
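
Something like this sketch is what I have in mind ("weight_Wx" is hypothetical, it is exactly the quantity being requested here; the axis ranges are placeholders):

DiagParticleBinning(
    deposited_quantity = "weight_Wx",  # hypothetical: weight x work along x
    every = 100,
    time_average = 100,                # temporal integration, as suggested above
    species = ["electron"],
    axes = [
        ["x",    0., 100., 200],       # where the work is deposited
        ["ekin", 0., 500., 100],       # split/filter by particle energy
    ],
)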

I believe the axes could also include the particle id, which might be useful as an option to save the ids of only a certain group of particles and then track them in a second run. But I've never used that and am not completely sure how it would work: if particles get their id only when they pass some filter, wouldn't they acquire different ids in ParticleBinning (first run) and TrackParticles (second run)?

mccoys commented 1 year ago

About ids, I have no clue whether it would work. It sounds really tricky.

Maybe the best way would be to attach wx, wy, wz to all particles as an optional quantity accumulated at every timestep. This quantity would be accessible by both diagnostics, as are the other quantities px, py, etc. This would use a significant amount of memory, but that is the price to pay.

noreasonatall1 commented 1 year ago

That is what I was wondering a couple of responses earlier: if you have to calculate the works anyway, why not add them as an output for both diagnostics? From a user perspective that would be ideal. I agree that it should be optional, in order not to slow down runs where it is of no interest.

mccoys commented 1 year ago

I am now wondering whether it would be better to save the fields instead of the works. For instance, if we have the fields Ex, Ey, Ez at the particle position, then we can simply use (px Ex + py Ey + pz Ez)/sqrt(1+px^2+py^2+pz^2) to get the total work rate. Does that make sense? Is there any loss of information that I am missing?
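
In normalized units this is straightforward; a minimal numpy sketch (the function name and the q=-1 electron convention are mine):

import numpy as np

def work_rate(px, py, pz, Ex, Ey, Ez, q=-1.0):
    # Instantaneous dW/dt = q v.E per particle, in normalized units.
    # Momenta are gamma*v, so v = p/gamma with gamma = sqrt(1 + p^2).
    gamma = np.sqrt(1.0 + px**2 + py**2 + pz**2)
    return q * (px*Ex + py*Ey + pz*Ez) / gamma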

The nice thing is that we can imagine calculating more stuff than just the work.

Now, this would not provide integration by default, but, at least in the ParticleBinning diag, the time_average option could do that for periods of time.

Is this ok? Or is it better to have directly the integral of work hardcoded from the beginning?

noreasonatall1 commented 1 year ago

This makes total sense until you start actually analyzing the works :) I think it is better to calculate and save the works themselves, and here is why.

As I mentioned earlier, there can be a problem of the works diverging from the particle energy, i.e. Wx+Wy+Wz != gamma-1. This happens for many particles at the stage where they are trapped in the plasma wave: the oscillations are rapid, and if the time sampling is not frequent enough the integration is incorrect. Here's an example (from TrackParticles) with 6 sampling points per laser period (and a timestep of 1/36 of a laser period) [attached image]. As you can see, 6 points are clearly not enough.

To avoid that, the works have to be computed at every timestep. If the option is to save px, py, pz, Ex, Ey, Ez, you need to save them at EVERY timestep for the calculation to be correct -> large outputs. There could be a scheme of saving px, py, pz, Ex, Ey, Ez between data-saving points -> calculating the work or something else -> deleting the temporary data. But if the works Wx, Wy, Wz are saved, I can imagine some Wx_temp (for example) variable that is incremented every timestep but flushed to the output only when needed -> much simpler, and it greatly reduces the output size while keeping it correct.

So in principle, for the works you need 3*N variables for N particles, while for p and E you need 2*3*N*(number of timesteps between data-saving points?). I hope you can see my point in favor of coding Wx, Wy, Wz specifically. In any case, I would still add a check that Wx+Wy+Wz = gamma-1, though.
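
A toy illustration of that Wx_temp accumulator idea (plain Python, nothing Smilei-specific): the work is incremented at every timestep but written out only once per output period, so the storage stays at a few numbers per particle no matter how fine the time sampling is.

import math

n_steps, output_period, dt, q = 3600, 360, 0.01, -1.0

Wx = 0.0                            # one accumulator per particle and component
for step in range(n_steps):
    t  = step * dt
    vx = 0.1 * math.cos(t)          # toy oscillating velocity
    Ex = math.sin(t)                # toy field sampled at the particle
    Wx += q * vx * Ex * dt          # accumulate q*v.E*dt at full time resolution
    if (step + 1) % output_period == 0:
        print(f"t = {t:6.2f}   Wx = {Wx:+.6f}")   # flush only when needed

The point is that the output cost is independent of the integration resolution.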

mccoys commented 1 year ago

I was not clear in my previous comment; I should not have said "save". I meant that Ex, Ey and Ez could be made available to the diagnostics. In TrackParticles it would indeed be kind of useless, as saving this to disk is too costly. However, in ParticleBinning, the time_average argument could be used to integrate over some period of time.

noreasonatall1 commented 1 year ago

Well, they are already available in TrackParticles, and as for ParticleBinning, I have no idea what access to the fields might be useful for there. I think this can already be done with the Fields diagnostic (it also has time_average + subgrid), or am I missing something?

Maybe we are misunderstanding each other quite a bit here. My overall idea is that it would be really useful to have the works as an output of TrackParticles and ParticleBinning. The simplest realization, in my opinion, is to add incrementing values of W for each particle and add these quantities to the diagnostics outputs.

Also, this is quite a far cry from my original question, so should I change the title?

mccoys commented 1 year ago

No, you can keep the title. The thing is, we have to go step by step. To have the works, we first need the fields anyway. So as a first step I am considering adding access to the fields, possibly with a special case for AM with all modes. Then we can move on to accumulating the works.

noreasonatall1 commented 1 year ago

Hi! Just wanted to ask how the addition of fields/works is going. Do you need any more input from me at this stage?

mccoys commented 1 year ago

I managed to get a branch working with the Ex, Ey, Ez, Bx, By, Bz fields available in particle binning diagnostics. Accumulating the works is a bit more complicated, as these must be saved and communicated when patches move to other MPI processes or when there is a checkpoint. I am also trying to make this change clever in a way that could benefit other applications considered before, like calculating high-energy radiation from particle motion.

Having the fields expressed in cylindrical coordinates is more involved, as it requires new interpolators. This means potentially a larger computation time and more memory usage.

noreasonatall1 commented 1 year ago

Wow, sounds great! And what about the works in TrackParticles? Btw, I'm open to doing some beta-testing if necessary.

mccoys commented 1 year ago

I don't really know what to do in TrackParticles. The issue is the same because we would need new interpolators for all the modes.

noreasonatall1 commented 1 year ago

I meant TrackParticles in regular Cartesian geometry. It already has all of the fields + velocities (momenta). So adding work as an accumulating quantity there would be relatively easy?

mccoys commented 1 year ago

I see. In that case, I imagine it would be easier to use the same tool as for particle binning and simply make TrackParticles access the same data.

mccoys commented 1 year ago

I pushed, in the develop branch, a new option to have the works Wx, Wy and Wz available in the binning and tracking diags.

noreasonatall1 commented 1 year ago

This is wonderful news! I wanted to test it out, but was met with some errors. For binning it just says DiagParticleBinning #2, axis 3: Wx unknown. For tracks the error is DiagTrackParticles #0: attribute Wx requires keep_interpolated_fields to include 'Wx'.

I cloned the branch with git clone -b develop https://github.com/SmileiPIC/Smilei.git

mccoys commented 1 year ago

The error is self-explanatory, but it refers to documentation that is not yet published. You can build your own version of the documentation with make doc (assuming you have the sphinx module in your python), then open build/html/index.html.

Anyway, you should specify, in the relevant Species, the argument keep_interpolated_fields = ["Wx"].
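
For example, a namelist fragment based on the error message above (check the locally built documentation for the authoritative parameter list):

Species(
    name = "electron",
    # ... the rest of the species definition ...
    keep_interpolated_fields = ["Wx", "Wy", "Wz"],  # accumulate the works on particles
)

DiagTrackParticles(
    species = "electron",
    every = 100,
    attributes = ["x", "px", "py", "pz", "Wx", "Wy", "Wz"],
)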

noreasonatall1 commented 1 year ago

Thank you for the explanation; I've never built the docs locally before, the website was always sufficient :) Now the simulation is running correctly and should finish in ~16 hours. I used the same parameters as for the manual work calculation previously, so a direct comparison will be possible. I will let you know the results (hopefully) tomorrow.

noreasonatall1 commented 1 year ago

UPD: it crashed

    608/9360     1.0620e+02     1.0047e+03   (  2.6728e+01 )            4530
Stack trace (most recent call last) in thread 2151447:
#7    Object "[0xffffffffffffffff]", at 0xffffffffffffffff, in
#6    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7fe67c6869ff, in
#5    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7fe67c5f4b42, in
#4    Object "/lib/x86_64-linux-gnu/libgomp.so.1", at 0x7fe67c7c5b9d, in
#3    Object "./smilei", at 0x5556cf36606b, in
#2    Object "./smilei", at 0x5556cf20fb10, in VectorPatch::runAllDiags(Params&, SmileiMPI*, unsigned int, Timers&, SimWindow*)
#1    Object "./smilei", at 0x5556ceebed48, in DiagnosticTrack::run(SmileiMPI*, VectorPatch&, int, SimWindow*, Timers&)
#0    Object "./smilei", at 0x5556ceec5b06, in void DiagnosticTrack::fill_buffer<double>(VectorPatch&, unsigned int, std::vector<double, std::allocator<double> >&)
Segmentation fault (Address not mapped to object [(nil)])
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node solver exited on signal 11 (Segmentation fault).

UPD2: I feel like the crash is somehow caused by ionization. I disabled it, changed the target to hydrogen plasma, and everything is ok (it passed the 1000th timestep). Note that at these timesteps I write every particle to TrackParticles (a particles.x>0 filter), so no "cropped" tracks should occur.

mccoys commented 1 year ago

I did not think about ionization. It creates particles, which changes the arrays, so the new fields are probably not well constructed in that case. I will look into this.

mccoys commented 1 year ago

@noreasonatall1 I fixed the segfault due to ionization in the develop branch

noreasonatall1 commented 1 year ago

Ok. I recompiled Smilei and restarted the simulation; everything works well so far (~20% of the simulation time has passed). I will update you when I get the results.

noreasonatall1 commented 1 year ago

The simulation crashes again. I restarted it several times, because at first the error was unclear and I attributed it to some server troubles.

   3952/9360     6.8984e+02     1.4295e+04   (  8.0393e+01 )            6502
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node solver exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

But at the third restart it changed to:

   3952/9360     6.8984e+02     4.5717e+03   (  7.9139e+01 )            6401
Stack trace (most recent call last) in thread 900836:
#8    Object "[0xffffffffffffffff]", at 0xffffffffffffffff, in
#7    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7fea63e699ff, in
#6    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7fea63dd7b42, in
#5    Object "/lib/x86_64-linux-gnu/libgomp.so.1", at 0x7fea63fa8b9d, in
#4    Object "./smilei", at 0x5610c1f1655d, in
#3    Object "./smilei", at 0x5610c1dd6b8e, in VectorPatch::dynamics(Params&, SmileiMPI*, SimWindow*, RadiationTables&, MultiphotonBreitWheelerTables&, double, Timers&, int)
#2    Object "./smilei", at 0x5610c1dd69b3, in VectorPatch::dynamicsWithoutTasks(Params&, SmileiMPI*, SimWindow*, RadiationTables&, MultiphotonBreitWheelerTables&, double, Timers&, int)
#1    Object "./smilei", at 0x5610c1f35401, in Species::dynamics(double, unsigned int, ElectroMagn*, Params&, bool, PartWalls*, Patch*, SmileiMPI*, RadiationTables&, MultiphotonBreitWheelerTables&)
#0    Object "./smilei", at 0x5610c1eb3e3b, in Projector3D2Order::basic(double*, Particles&, unsigned int, unsigned int, int)
Segmentation fault (Address not mapped to object [0x7ff10217d4f8])
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node solver exited on signal 11 (Segmentation fault).

It's halfway done, so I will take a look at the results, but yeah - something does not work.

mccoys commented 1 year ago

Are you sure this is related to the new option? It looks like maybe the CFL condition is not satisfied.

mccoys commented 1 year ago

Note also that your push time is very high. You may have a very bad configuration in terms of processors and number of patches

noreasonatall1 commented 1 year ago

Yes, because this simulation has been run many times on the stable Smilei version. That is why I chose it to test the new functions; the only thing changed is the executable. The CFL condition is met: I have lambda/32 in the x direction, lambda/4 in y and z, and a t0/36 timestep.

Hm, I will look into it, as the parallelization parameters were first (and last) tuned ~2 years ago. But I run Smilei on a single old machine (HP DL380 Gen9) which is routinely used by multiple people at once, so probably not much can be improved there.

noreasonatall1 commented 1 year ago

My colleague also gets the same crash (his simulation is in AMcylindrical and on a different server). In his simulation it happens around the time when particles start to actively leave the simulation window after wavebreaking.

Stack trace (most recent call last) in thread 76951:
#7    Object "[0xffffffffffffffff]", at 0xffffffffffffffff, in
#6    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7fa157cba4ce, in clone
#5    Object "/lib/x86_64-linux-gnu/libpthread.so.0", at 0x7fa15820ffa2, in
#4    Object "/usr/lib/x86_64-linux-gnu/libgomp.so.1", at 0x7fa157db279d, in
#3    Object "./smilei", at 0x55dc4a95d459, in
#2    Object "./smilei", at 0x55dc4a836903, in VectorPatch::dynamics(Params&, SmileiMPI*, SimWindow*, RadiationTables&, MultiphotonBreitWheelerTables&, double, Timers&, int)
#1    Object "./smilei", at 0x55dc4a9788c8, in Species::dynamics(double, unsigned int, ElectroMagn*, Params&, bool, PartWalls*, Patch*, SmileiMPI*, RadiationTables&, MultiphotonBreitWheelerTables&)
#0    Object "./smilei", at 0x55dc4a708574, in InterpolatorAM2Order::fieldsWrapper(ElectroMagn*, Particles&, SmileiMPI*, int*, int*, int, unsigned int, int)
Segmentation fault (Address not mapped to object [0x135])
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node solver5 exited on signal 11 (Segmentation fault).
-------------------------------------------------------------------------
mccoys commented 1 year ago

Is it possible to share your input file so I can try to reproduce your bug?

noreasonatall1 commented 1 year ago

Hi! Sorry for the late response: I analyzed the works from the simulation above and found something fascinating.

In this thread I already mentioned that routinely Wx+Wy+Wz != gamma-1. It is still the case with the new diagnostic. I looked at TrackParticles (output written at every timestep) and at two different particle behaviors: ionization injection, i.e. gamma>1 almost immediately, and wavebreaking, i.e. the particle oscillates for a long time with gamma~1.

First case: bad but tolerable; if the simulation went further, the error would probably reach 10-20% (see the "ionization" plot). Second case: speaks for itself (see the "wavebreaking" plot).

So I decided to compare the results to calculations from the p and E outputs of TrackParticles. Doing the calculation the straightforward way, with -v[i]*E[i], is still bad (see the graphs).

However, as far as I understand, the standard Boris pusher has fields evaluated at integer timesteps [i*dt] and velocities at half-integer timesteps [(i+1/2)*dt]. At which times are they written in the TrackParticles output? I assumed that you just dump the already-evaluated quantities, which would slightly shift the phase between v and E away from pi/2 in the oscillatory part of the trajectory -> the integral drifts away from zero. I tried to mitigate that by using not E[i] but (E[i-1]+E[i])/2 in the manual calculation, and it worked really well.

First case: on par with the new diagnostic (see the "ionization2" plot).

Second case: major improvement (see the "wavebreaking2" plot). A closer look at the phase shift of the Y components (the "wavebreaking31" plot) shows it is clearly not pi/2.

Therefore E and v should be evaluated at the same time for the work to be calculated correctly. Why it is still != gamma-1, I have no idea at this point. I applied the E[i-dt/2] treatment to a different simulation and the final error was <10%, which I would personally consider good enough to work with.
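
For reference, my manual calculation is roughly the following happi-based sketch (the path and species name are from my setup; q=-1 for electrons):

import happi
import numpy as np

S = happi.Open("/path/to/simulation")     # adjust to your results folder
d = S.TrackParticles("electron",
                     axes=["px", "py", "pz", "Ex", "Ey", "Ez"]).getData()

px, py, pz = d["px"], d["py"], d["pz"]    # arrays of shape (ntimes, nparticles)
Ex, Ey, Ez = d["Ex"], d["Ey"], d["Ez"]

gamma = np.sqrt(1.0 + px**2 + py**2 + pz**2)
vx, vy, vz = px/gamma, py/gamma, pz/gamma

# Midpoint fields, (E[i-1]+E[i])/2, to realign the phase between v and E
Exm, Eym, Ezm = (0.5*(E[1:] + E[:-1]) for E in (Ex, Ey, Ez))

dt = S.namelist.Main.timestep
W = -np.cumsum(vx[1:]*Exm + vy[1:]*Eym + vz[1:]*Ezm, axis=0) * dt
# Sanity check: W should track gamma[1:] - 1 along each trajectory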

I use an h5 file for the ion density, which is ~500 MB, so I put the inputs on Google Drive. However, if you are going to run it as is, be wary of the TrackParticles file size: it is 2.7 TB.

mccoys commented 1 year ago

I am confused. I did check the equality between the work and gamma-1 on some trajectories. It could be related to ionization, I guess.

noreasonatall1 commented 1 year ago

Why would ionization change anything? It's still the same tracks, just with NaN at the beginning. Obviously, on some tracks they look more similar (~5% difference), but I could not find any W = gamma-1 tracks in my simulation. Without ionization, were they fully equal?

Also I'm still curious about:

  1. which timesteps are used to calculate the works in the new diagnostic;
  2. at which timesteps px,y,z and Ex,y,z are evaluated in the TrackParticles output;
  3. how the fields are interpolated onto the particles: is the interpolation linear, spline, etc.?
  4. whether the field values in the Tracks output are the same ones used in the numerical scheme, or whether there is a different interpolation procedure.

I think I'm mostly confused as to why none of the approaches fully converges with gamma.

mccoys commented 1 year ago

When I have the chance I will find the answers to these questions. But I tried to make the calculation of the works precise enough. (The E and v from TrackParticles are probably not at the right timestep for a good manual calculation.)

I did not include ionization though. This can affect the calculation of energy for ions, but I don't see a good reason for electrons

mccoys commented 1 year ago

Ah. I did not test what happens in AM. This should be investigated

noreasonatall1 commented 1 year ago

I can give you the input files for an AM + MW simulation that also crashes.

Z10Frank commented 1 year ago

Hello, I see you are using maxwell_solver = 'Lehe', which can create some superluminal waves. Was it to reduce numerical Cherenkov radiation? Can you also try the Yee solver? Also, you are using 32 points per laser wavelength; the interpolation errors due to the staggering of the E and B fields with relativistic particles in LWFA can increase when the mesh is not fine enough in the x direction. I know it increases the simulation time, but can you try using 40 or even 50 points per laser wavelength? This way you should also be able to increase the ratio dt/dx, which improves dispersion and reduces staggering errors.

noreasonatall1 commented 1 year ago

Yes, it was to get rid of the Cherenkov radiation. However, in this simulation the difference between Yee and Lehe is negligible, so I can switch solvers no problem.

You mean 40 or 50 points in the X direction? Why would that increase the dt/dx ratio? For 50 points/lambda in X and 4 points/lambda in Y, Z I would need at least rest~51. So dt/dx would decrease from 36/32 to 51/50. Obviously dt can be finer, I'm just trying to understand your suggestion.

Z10Frank commented 1 year ago

In your namelist you have

l0 = 2.*math.pi         # laser wavelength
t0 = l0                 # optical cycle
resyz = 4.              # nb of cells in one laser wavelength (y and z)
resx = 32.              # nb of cells in one laser wavelength (x)
rest = 36.              # nb of timesteps in one optical cycle
...
cell_length = [l0/resx, l0/resyz, l0/resyz],
timestep = t0/rest,

In other words, dx=l0/32, dy=dz=l0/4, dt=l0/36=32/36*dx, or dt~0.89dx. This respects the CFL condition of the Yee solver in 3D, dt<1/math.sqrt(1/dx**2+1/dy**2+1/dz**2), which here gives dt/dx<0.985. You can also try to increase dt toward the CFL limit in your case and see if the error is reduced.

If you change to dx=l0/40, the new condition will be dt/dx<0.99; this will allow you to increase the dt further in case you want to try a finer mesh.
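
To make the arithmetic explicit, a quick check in plain Python:

import math

l0 = 2. * math.pi                 # laser wavelength

def max_dt_over_dx(resx, resyz):
    # 3D Yee CFL limit, dt < 1/sqrt(1/dx^2 + 1/dy^2 + 1/dz^2), expressed as dt/dx
    dx, dyz = l0/resx, l0/resyz
    return (1.0 / math.sqrt(1.0/dx**2 + 2.0/dyz**2)) / dx

print(max_dt_over_dx(32., 4.))    # ~0.985, the current resolution
print(max_dt_over_dx(40., 4.))    # ~0.990, with the finer longitudinal mesh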

I don't know if this will significantly change the results, but maybe it's a test that can be explored.

noreasonatall1 commented 1 year ago

Oops, I was so used to thinking in terms of the number of cells that I forgot they are inverted here. Got it now. I will increase the resolution and see what happens, but it will take a couple of days at least.

mccoys commented 1 year ago

I found a bug in the process of writing Wx, Wy and Wz in tracked particles for some cases. It might apply to your case, so please try the updated version.

This is a test on some electron trajectories. It works very well, although there is a very small discrepancy, likely due to machine precision. [attached image]

noreasonatall1 commented 1 year ago

I did try the updated version, with no success: there are still trajectories where the new diagnostic does great, okay, and awful, all in the same simulation. Here I had also changed the EM solver to Yee (see the "Figure_1", "Figure_2" and "Figure_3" plots).

I had also tried refining the mesh, as @Z10Frank suggested (+ Yee solver), but the simulation crashed anyway. I had lambda/48 in the X direction and a tau/50 timestep.


    3584/12999     4.5044e+02     1.5037e+04   (  8.4895e+01 )            6365
munmap_chunk(): invalid pointer
Stack trace (most recent call last):
#16   Object "[0xffffffffffffffff]", at 0xffffffffffffffff, in
#15   Object "./smilei", at 0x562ee117a554, in _start
#14   Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f2522685e3f, in __libc_start_main
#13   Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f2522685d8f, in
#12   Object "./smilei", at 0x562ee1178954, in main
#11   Object "/lib/x86_64-linux-gnu/libgomp.so.1", at 0x7f25228b8a15, in GOMP_parallel
#10   Object "./smilei", at 0x562ee1656146, in
#9    Object "./smilei", at 0x562ee14f74e1, in VectorPatch::cleanParticlesOverhead(Params&, Timers&, int)
#8    Object "./smilei", at 0x562ee151c236, in Patch::cleanParticlesOverhead(Params&)
#7    Object "./smilei", at 0x562ee14de67e, in Particles::shrinkToFit(bool)
#6    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f2522701519, in free
#5    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f25226fd05b, in
#4    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f25226fcd7b, in
#3    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f25226e56f5, in
#2    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f25226847f2, in abort
#1    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f252269e475, in raise
#0    Object "/lib/x86_64-linux-gnu/libc.so.6", at 0x7f25226f2a7c, in pthread_kill
Aborted (Signal sent by tkill() 3735053 1000)
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node solver exited on signal 6 (Aborted).