ComputationalRadiationPhysics / picongpu

Performance-Portable Particle-in-Cell Simulations for the Exascale Era :sparkles:
https://picongpu.readthedocs.io

omega_p * dt <= 0.1 condition for solid target ion acceleration #3160

Closed IoannisTazes closed 4 years ago

IoannisTazes commented 4 years ago

Hi all! I am trying to simulate proton acceleration from the interaction of an intense 25 fs FWHM Gaussian pulse (a0 = 8.5, 800 nm central wavelength) with a 13 micron, 10-times-ionized Al target, with a long-scale-length plasma expanding from the front and the rear of the target. The electron density is 350 nc, and my grid is:

    /** equals X
     *  unit: meter */
    constexpr float_64 CELL_WIDTH_SI = 22e-6/1792;
    /** equals Y - the laser & moving window propagation direction
     *  unit: meter */
    constexpr float_64 CELL_HEIGHT_SI = 60e-6/6144;

In the stdout file I get this:

PIConGPUVerbose PHYSICS(1) | Courant c*dt <= 1.51771 ? 1
PIConGPUVerbose PHYSICS(1) | species e: omega_p * dt <= 0.1 ? 1.31958
PIConGPUVerbose PHYSICS(1) | species H: omega_p * dt <= 0.1 ? 0.0307951
PIConGPUVerbose PHYSICS(1) | species Al: omega_p * dt <= 0.1 ? 0.0592854
PIConGPUVerbose PHYSICS(1) | y-cells per wavelength: 51.4286
PIConGPUVerbose PHYSICS(1) | macro particles per device: 276480000
PIConGPUVerbose PHYSICS(1) | typical macro particle weighting: 1.89679e+06

All conditions seem to be fine, except

e: omega_p * dt <= 0.1 ? 1.31958

which I think I would need at least 10 times higher resolution to resolve, e.g.:

constexpr float_64 CELL_WIDTH_SI = 22e-6/17920;

constexpr float_64 CELL_HEIGHT_SI = 60e-6/61440;

which I don't think I will be able to do. Is this going to strongly affect my simulation results, or am I good to go? I also observed, in the simulations with the resolution I used, that up to 200 fs the protons show a reasonable energy spectrum, but beyond that they seem to keep accelerating to very high energies. Thank you in advance!

sbastrakov commented 4 years ago

@PrometheusPi @n01r @steindev could you comment?

n01r commented 4 years ago

Hi @IoannisTazes, welcome and thanks for your question!

Could you tell us a few more things about the system you are running on and the maximum memory available to you? Are you trying to run in 2D or 3D? (The macro particle weighting looks a bit high, too.) Are you already using the Memory Calculator utility that ships with the code? It will help you estimate beforehand whether or not your system will be able to run the simulation.
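
If you cannot run the Memory Calculator right away, even a back-of-envelope estimate in the same spirit helps. A minimal standalone sketch, assuming rough per-cell and per-particle sizes (these numbers are illustrative guesses, not PIConGPU's actual memory layout):

    #include <cstdio>

    int main()
    {
        // hypothetical 2D grid from the setup above
        const double cells = 1792.0 * 6144.0;

        // rough assumptions: ~9 field components (E, B, J) at 4 bytes each,
        // ~32 bytes per macroparticle, 20 particles per cell and species, 3 species
        const double fieldBytes    = cells * 9 * 4;
        const double particleBytes = cells * 20 * 3 * 32;

        std::printf("fields:    ~%.2f GiB\n", fieldBytes / (1024.0 * 1024.0 * 1024.0));
        std::printf("particles: ~%.2f GiB\n", particleBytes / (1024.0 * 1024.0 * 1024.0));
    }

Such a number, compared against the memory of a single device, quickly tells you how many devices a given grid needs at minimum.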

Now, addressing your question: omega_pe * dt <= 0.1 is a notoriously hard condition to fulfill, especially for solid-density targets. You can, of course, only run a simulation that fits on your system, and to do that you sometimes have to sacrifice resolution or box size. (Although for certain setups, when reducing dimensionality or other tricks does not answer your question satisfactorily, you might have to apply somewhere for more resources.) I would suggest applying the following strategy:

  1. Try to resolve the plasma wavelength of the electrons at least spatially
    With an intense laser pulse of a0 > 1 you can expect some steepening of the initial electron density up to a factor of 2 or 3.
    Make sure that you have enough spatial steps to resolve the plasma oscillation in space for this case.
  2. I would recommend using at least 12 cells per plasma wavelength (@ 350 n_c) in 2D.
    That means sqrt(1/350) * 800 nm / 12, which is roughly 3.5 nm (the arithmetic is worked out in the sketch after this list).
    That way you should be resilient to density changes and strong electric-field variations within the target.
  3. Now this might already blow up your capacities, so for the first runs consider using a thin slice of the target along the beam axis, setting the transverse boundaries to periodic, and setting the laser profile to plane wave.
    You will be emulating an on-axis scenario.
  4. With this strategy, you will fulfill omega_pe = 2pi * c / (12dx) = 2pi / (12*sqrt(2)*dt), which gets you omega_pe * dt ≈ 0.37; with 17 time steps per electron plasma period, that seems a comfortably reasonable resolution in time as well.
  5. Normally, you would not want to make dx and dy (and dz) different in length in a solid-density setup, but in the plane-wave, on-axis scenario it is not as critical, because plasma waves that propagate transversely to the beam axis should not occur as much.
    If you have trouble meeting the conditions otherwise, consider doing this.
  6. From this scenario you run the simulation and see how your plasma and particle energies evolve.
  7. Since you will likely still be interested in a larger box to learn more about the directionality of your electrons and other particles, decrease the resolution and work your way up to larger box sizes. But always look out for strange effects, like unreasonable increases in energy (mostly of electrons within the target - you can use the Particle Energy plugin for that) or ultrafast ionization (if you have ionization activated), where charge-state fronts travel with unreasonably high speed through the target.
  8. It is okay to run slightly underresolved simulations (because you're likely forced to) but try to double-check with a scenario that resolves the dynamics much better. Just make sure to always question the results of the underresolved simulations.
  9. In 2D scenarios, try to have 20-100+ electrons per cell if you can. In 3D scenarios this number can be reduced, since the third dimension provides more particles.
  10. Be aware that the final ion energies in 2D will usually be larger than the 3D result by at least a factor of 1.5-2.
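
To make the arithmetic in points 2 and 4 concrete, here is a small standalone sketch of the worked numbers (350 n_c and 800 nm taken from the setup above):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double pi = 3.141592653589793;
        const double lambdaLaser = 800e-9; // laser wavelength [m]
        const double nOverNc = 350.0;      // electron density in units of n_c

        // plasma wavelength scales as 1/sqrt(n): lambda_p = lambda_laser * sqrt(n_c / n_e)
        const double lambdaPlasma = lambdaLaser / std::sqrt(nOverNc); // ~42.8 nm
        const double dx = lambdaPlasma / 12.0;                        // 12 cells -> ~3.6 nm

        // 2D CFL at the limit: c*dt = dx / sqrt(2), so
        // omega_pe * dt = (2*pi*c / lambda_p) * dx / (c*sqrt(2)) = 2*pi / (12*sqrt(2))
        const double omegaPeDt = 2.0 * pi / (12.0 * std::sqrt(2.0)); // ~0.37
        const double stepsPerPeriod = 2.0 * pi / omegaPeDt;          // ~17

        std::printf("lambda_p = %.1f nm, dx = %.2f nm\n", lambdaPlasma * 1e9, dx * 1e9);
        std::printf("omega_pe * dt = %.2f (%.0f steps per plasma period)\n",
                    omegaPeDt, stepsPerPeriod);
    }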

Alright, that was a bit long. I hope that helps you to get started. Please keep asking if you require more assistance.

IoannisTazes commented 4 years ago

Hi @n01r, and thank you for the answer! I can use up to 16 NVIDIA Tesla K40m accelerated nodes. I am currently running in 2D, and will run 3D in the future. I will try the strategy you suggested, starting with 12 cells per plasma wavelength + periodic transverse boundaries, and let you know what it gives! An additional question related to the subject: I have initialized my plasma target with:

constexpr float_64 BASE_DENSITY_SI = 350 * nc;

and then i have set the other species ratios as

    /* ratio relative to BASE_DENSITY (n_e) */
    value_identifier( float_X, DensityRatioAluminium, 1. / 10. );

and

    /* ratio relative to BASE_DENSITY (n_e) */
    value_identifier( float_X, DensityRatioHydrogen, 1. / 100. );

would it make any difference if I had set the ionized Al density (35 nc) as BASE_DENSITY:

constexpr float_64 BASE_DENSITY_SI = 35 * nc;

and fixed the other species' density ratios relative to the Al BASE_DENSITY, like n_e = 10 * BASE_DENSITY instead of n_Al = 1/10 * BASE_DENSITY? (I hope it is clear what I mean.) I think it shouldn't, but this confused me:

    struct LogOmegaP
    {
        void operator()()
        {
            using FrameType = typename T_Species::FrameType;
            const float_32 charge = frame::getCharge< FrameType >();
            const float_32 mass = frame::getMass< FrameType >();
            log< picLog::PHYSICS >("species %2%: omega_p * dt <= 0.1 ? %1%") %
                (sqrt(BASE_DENSITY * charge / mass * charge / EPS0) * DELTA_T) %
                FrameType::getName();
        }
    };

thank you in advance!

n01r commented 4 years ago

and let you know what gives!

Great! :) Yeah, also check values like the average electron kinetic energy evolution over time in an on-axis region, to get a benchmark value that you can check your later, larger but less-resolved simulations against. Try to make measurements that you can use to verify that your simulations are behaving in a reasonable way.

Setting a different BASE_DENSITY and adjusting the density ratios works just as well. The idea is that you find a common base density among all your species so that, internally, when densities are calculated in our PIC units, we do not get values extremely far from 1. The reason is numerical stability and precision: when we add the contribution of a particle or particle density to a quantity, and the two quantities are very different in order of magnitude, we may lose the smaller contribution entirely due to floating-point precision. Normalizing our densities to a common value is aimed at avoiding this problem.
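
A minimal standalone illustration of that precision argument in single precision (the values are arbitrary):

    #include <cstdio>

    int main()
    {
        // float carries ~7 significant decimal digits, so adding a contribution
        // 8 orders of magnitude smaller than the running sum loses it entirely
        const float big = 1.0e8f;
        const float sum = big + 1.0f;
        std::printf("contribution lost? %d\n", sum == big); // prints 1
    }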

I think it shouldn't but this confused me

Hmm, so all of the quantities up there are already in PIC units, but it seems that the density ratio is missing, so the value the log prints is not valid for all species. Can you confirm, @sbastrakov, @steindev, @PrometheusPi?

The plasma frequency is

sqrt(species_density * species_charge^2 / (epsilon_0 * species_mass))

But even when I follow the substitutions of the quantities towards PIC units, I cannot seem to find a density_ratio appearing anywhere.

sbastrakov commented 4 years ago

@n01r to me it looks like you are right, and we should indeed have multiplied the value under the square root by the density ratio of the species, traits::GetDensityRatio< T_Species >::type::getValue()
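
For reference, a minimal sketch of that fix inside LogOmegaP (the idea only, not necessarily the final patch):

    // multiply the density under the square root by the per-species ratio
    const float_X densityRatio = traits::GetDensityRatio< T_Species >::type::getValue();
    log< picLog::PHYSICS >("species %2%: omega_p * dt <= 0.1 ? %1%") %
        (sqrt(BASE_DENSITY * densityRatio * charge / mass * charge / EPS0) * DELTA_T) %
        FrameType::getName();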

IoannisTazes commented 4 years ago

Hey guys, and thanks again for the answers! Just to be clear, since I am still a bit confused: given that the density ratio isn't there, does this mean that by setting the aluminum ion density as BASE_DENSITY (which is 10 times lower than the electron density in my case of 10-times-ionized Al) you reduce the reported omega_p by a factor of sqrt(10), and so the

omega_p * dt <= 0.1

condition is easier to satisfy? And if so, I suppose using any species density other than the electrons' as BASE_DENSITY should be avoided? Thank you in advance again!

PrometheusPi commented 4 years ago

Yes - if your BASE_DENSITY is 10x smaller than the actual electron density (from the ionized aluminium), the check will report an omega_p that is sqrt(10) smaller, i.e. it will claim you sample the plasma dynamics sqrt(10) times better than you actually do. (In the end, the entire electron density is what matters - but in your case the extra electrons from hydrogen can be neglected.)
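
Concretely, with the buggy check, setting BASE_DENSITY = 35 nc instead of 350 nc would shrink the reported electron value by sqrt(10):

    1.31958 / sqrt(10) ≈ 0.417

even though the actual electron plasma frequency in the simulation is unchanged.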

sbastrakov commented 4 years ago

@IoannisTazes yes, so it looks like this particular output mistakenly uses the base density for all species. Please note that in other places, such as generation of macroparticles, the density ratios seem to be properly in place. So this should only affect the output. We will fix it once it's fully confirmed.

IoannisTazes commented 4 years ago

So I should not take advantage of that and set my BASE_DENSITY = 35 nc (aluminum ions) to satisfy the omega_p * dt <= 0.1 condition more easily, but rather set BASE_DENSITY = 350 nc (electrons), or even BASE_DENSITY = 353.5 nc (including the hydrogen electrons), in order to have realistic simulation results? Sorry for the train of questions, but I need to be definitely sure before I start my next simulation :)

PrometheusPi commented 4 years ago

@IoannisTazes The BASE_DENSITY has no physical meaning - it is just a reference value. It does not matter (physically) whether you put the factor of 1/10 into your BASE_DENSITY or into your density ratio / density manipulation.

In the end, the total density is what needs to be resolved. If your peak density during setup is 35.35 n_c, that is the value you need to resolve. If your peak density is, however, 353.5 n_c, then you need a higher resolution. The warning you are seeing is just caused by the fact that the value used to calculate the check is only the BASE_DENSITY value - no ratio factors applied.

The only reason we normalize all our densities to BASE_DENSITY is to reduce numerical floating-point errors.
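
For illustration, here are the two equivalent ways to write the setup from this thread (DensityRatioElectrons is a hypothetical identifier name; each variant is a complete alternative, not to be combined):

    /* variant A: electrons as the reference */
    constexpr float_64 BASE_DENSITY_SI = 350 * nc;
    value_identifier( float_X, DensityRatioAluminium, 1. / 10. );

    /* variant B: Al ions as the reference */
    constexpr float_64 BASE_DENSITY_SI = 35 * nc;
    value_identifier( float_X, DensityRatioElectrons, 10. );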

@IoannisTazes Because I got a bit confused with both hydrogen and aluminum: what is the electron density of both combined (35.35 n_c or 353.5 n_c)?

IoannisTazes commented 4 years ago

Oh sorry, my mistake: I get 10 electrons per Al atom and 1 per hydrogen atom, although the hydrogen density is 10 times lower than the Al density. That gives n_e = 35 * 10 nc = 350 nc from Al, plus n_H = 1/10 * 35 nc = 3.5 nc from hydrogen, resulting in 353.5 nc in total.


n01r commented 4 years ago

Right - just to repeat it to make this crystal clear: the mistake you found, @IoannisTazes (good find!), was really just in the old log messages that are written at simulation start. It is completely independent of what the real plasma frequency in the simulation is. The values in the log were designed as quick indicators for users to double-check that the simulation is configured correctly. The main place to actually check this, however, is in the inputs and data outputs. So make sure that the densities you configure appear in the outputs just as you expect them. The guidelines I wrote above still apply for setting up the simulation. So yeah, you need to resolve the plasma frequency for 353.5 nc well in your case ;) no cheating possible there. :laughing:

Again, thanks for finding that bug! :tada:

n01r commented 4 years ago

Ah, I also just saw that in your previous setup your CFL number was relatively far away from unity.

Courant c*dt <= 1.51771 ? 1

Try to stay as close to unity as possible (without letting it reach exactly unity), since this is numerically the most stable situation.

That means your time step is coupled to your cell size. For simplicity's sake, I'll assume dx = dy = dz.

You would configure:

    dt = dx / (sqrt(2) * c)   in 2D
    dt = dx / (sqrt(3) * c)   in 3D

To stay safely below the limit, just increment sqrt(2) or sqrt(3) by 1 in the third decimal, making them 1.415 and 1.733, respectively.
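
In grid.param terms, a minimal 2D sketch (the 3.5 nm cell size is just the estimate from earlier in this thread; constant names mirror the snippets above):

    constexpr float_64 SPEED_OF_LIGHT_SI = 2.99792458e8; // m/s

    /* 2D: c * dt = dx / 1.415 keeps the CFL number just below 1;
     * use 1.733 instead of 1.415 in 3D */
    constexpr float_64 CELL_WIDTH_SI  = 3.5e-9;
    constexpr float_64 CELL_HEIGHT_SI = CELL_WIDTH_SI;
    constexpr float_64 DELTA_T_SI = CELL_WIDTH_SI / (1.415 * SPEED_OF_LIGHT_SI);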

sbastrakov commented 4 years ago

The omega_p calculation is now fixed and a clarification was added to the output by #3163. I will close this issue.

@IoannisTazes in case you have further questions you are welcome to reopen or create other issues.