ComputationalRadiationPhysics / picongpu

Performance-Portable Particle-in-Cell Simulations for the Exascale Era
https://picongpu.readthedocs.io

density calculation #3164

Closed cbontoiu closed 4 years ago

cbontoiu commented 4 years ago

I've got some problems with the charge density calculation. In my density.param I specify a density of 13.559e+27 m^-3 and use it for both carbon ions and electrons, as they come in equal numbers. I expected to get this number when plotting e_ChargeDensity, but instead I see something like 2e+09. I have three questions here:

  1. What is the unit? Still m^-3, or maybe something else, since the plot is a cut across the y-axis?
  2. Why is the value so much lower compared with my input?
  3. Why isn't the colour smooth across the circular region? There is no randomness in the charge distribution and the current time is 0; at least in the bulk core of the tubes the cell size should not matter for this calculation.

[image: e_ChargeDensity plot]

There is more confusion for me coming from the terminal. It reports 47.18 million macro-particles per device, but my calculation with 32x2560x32 cells and a 0.835 nm cell cube side yields only 2.89 million particles. The questions are:

  1. If no weighting is specified, why do I get a typical value of 0.18?
  2. Do I need to do something to prevent any weighting? I want to simulate all particles, so the concept of a macro-particle is not needed.

[image: terminal output]

As far as I know there are only two pieces of code related to weighting in my simulation. First, in speciesDefinition.param:

using DefaultParticleAttributes = MakeSeq_t< position< position_pic >, momentum, weighting, particleId >;

and second in speciesInitialization.param:

ManipulateDerive< manipulators::binary::UnboundElectronsTimesWeighting, Carbon, Electrons >

Thank you,

PrometheusPi commented 4 years ago

@cbontoiu Thank you for opening this issue and providing such a detailed description.

Regarding your questions 1 and 2: In principle this discrepancy can have various reasons, therefore I need to ask you some questions: 1.) How did you generate the plots above? Did you use the raw HDF5 data (which needs to be converted to SI units to be given in m^-3), or did you use the openPMD API (which converts automatically)? 2.) What do your speciesInitialization.param and your density.param look like (perhaps there is a number mismatch there)?

Regarding your question 3: The density is computed based on the macro-particle distribution and the macro-particle shape. I assume you chose a higher-order particle shape (the default is 2nd order). Thus the density is assigned to the grid using the associated distribution shape, which smooths out the structure.
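For illustration, here is a minimal Python sketch of the standard 2nd-order (triangular-shaped-cloud, TSC) assignment function, which is the default shape discussed further below; this is the textbook formula, not PIConGPU's implementation, but it shows why a sharp density edge gets spread over neighbouring cells:

def tsc(x):
    # fraction of a macro-particle's density assigned to the grid
    # point at normalized distance x (in units of the cell width)
    ax = abs(x)
    if ax < 0.5:
        return 0.75 - ax**2
    if ax < 1.5:
        return 0.5 * (1.5 - ax)**2
    return 0.0

# a particle centred on a grid point deposits 0.75 there and
# 0.125 on each direct neighbour: 0.75 + 2 * 0.125 = 1
print(tsc(0.0), tsc(1.0), tsc(-1.0))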

Regarding your second paragraph: The macro-particles per device is just the particles-per-cell multiplied by the cells per device. This is just a very rough estimate that assumes a homogeneous particle distribution and no static load balancing. For your setup, which contains vacuum, it definitely overestimates the number of macro-particles.
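As a rough sketch of that estimate (grid size taken from your first comment, 6 ppc from the startPosition discussed later in this thread; note this does not reproduce the 47.18 million from your log exactly, so further factors may enter that are not established here):

cells_per_device = 32 * 2560 * 32      # 2,621,440 cells
typical_ppc = 6                        # Random6ppc
print(cells_per_device * typical_ppc)  # ~1.6e7 macro-particles, upper bound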

Regarding your second set of questions, number 1: The weighting is determined by the density multiplied by the cell volume, divided by the number of particles per cell initialized (plus some magic regarding the minimal weighting, which however does not come into play here). In order to answer your question, I need the following details from you: What number do you get when you divide your BASE_DENSITY by your cell volume? How many particles do you initialize per cell? What is your minimal weighting?

Regarding your second set of questions, number 2: You should set your minimal weighting to one and make sure that you initialize enough particles per cell that you only deal with weighting = 1 macro-particles. But please be aware: particle-in-cell codes are based on mean-field theory, so the macro-particles used in PIC codes should not be considered real particles in most cases. (E.g., they still have a shape that does not represent a real electron.)
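To see what this requirement means in numbers (a quick arithmetic sketch; density and cell size are the values quoted later in this thread):

n = 13.559e27               # BASE_DENSITY_SI, 1/m^3
v_cell = (0.07883e-9) ** 3  # ~4.9e-31 m^3
print(n * v_cell)           # ~6.6e-3 real particles per cell

so at this resolution even a single macro-particle per cell cannot reach w = 1.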

cbontoiu commented 4 years ago

Thank you for your swift reply! Here are my answers and files:

  1. I used the openPMD Viewer embedded in a Jupyter notebook; the import syntax was from openpmd_viewer import OpenPMDTimeSeries.

  2. The files are attached below. There is a low MIN_WEIGHTING in particle.param, but I think this is necessary to avoid killing particles as they exist the simulation domain. For me, periodicity in the transverse directions (x and z) is essential, as I try to simulate the effect of laser interaction with an array of tubes, not only with one. This is why I use a density like this, without an absorber at any of the x and z edges.

[image]

density.param.txt speciesDefinition.param.txt speciesInitialization.param.txt particle.param.txt grid.param.txt laser.param.txt

  3. I did not change the order of the particle shape. I remember seeing these options somewhere, but I couldn't find the corresponding .param file in my source folder. It must then be the default shape type.

About the macro-particles per device: should I aim to lower the number (currently 47.18 million) towards my estimate (currently 2.89 million)? How do I do this? Is the particle shape a key factor here? Or is it the fine mesh size I use (currently 0.07883 nm)? The thing is that I was "forced" to reduce the mesh size when looking at the "poor resolution" field and charge density colour maps. Another reason is that I will need to introduce X-ray lasers with wavelengths as small as 0.285 nm and maybe 0.05 nm to reach the plasma frequency of my tubes, so mesh quality is something I want to keep.

For the BASE_DENSITY, this is 13.559e27 particles/m^3 and the cell volume is (0.07883 nm)^3 = 4.898e-31 m^3, so the ratio becomes 2.76e+58, but maybe you want to know the product, which is 6.642e-3 particles. I see, now I must compare with MIN_WEIGHTING = 0.0000001, which was used so far, and I think I'm fine along the idea that no particle will be removed, as I have seen in the thin-foil model.

Following your last sentence, it seems that all I can do is to increase the cell size (I cannot change the density, because it is a fact) until the product density*cell_volume > 1, and set MIN_WEIGHTING = 1 as a safety threshold. Then I would need to monitor the number of particles in time and make sure it does not drop significantly. Right?
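A quick arithmetic check of that condition (using the density above):

n = 13.559e27                  # 1/m^3
side = (1.0 / n) ** (1 / 3.0)  # cubic cell side at which density*cell_volume = 1
print(side)                    # ~4.2e-10 m, i.e. ~0.42 nm

so the cubic cell side would have to grow to roughly 0.42 nm.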

By the way, I see people using a slightly larger mesh size along the laser direction. Is this safe in general?

Many thanks and kind regards, Cristian

PrometheusPi commented 4 years ago

@cbontoiu Thanks for providing your input files.

According to your param files, your cell volume is:

cossin45 = 0.70710678118655
SIDE = 2*cossin45*9.810e-09
CELL = 176 # cells per unit

delta_x = SIDE/CELL
delta_y = delta_x
delta_z = delta_x

V_cell = delta_x * delta_y * delta_z
print(V_cell)

≈ 4.9 × 10^-31 m^3.

Your density profile returns either 1.0 or 0.0. Thus, at locations where the density is non-zero, you initialize Carbon at BASE_DENSITY_SI. Combining this with your 6-particles-per-cell initialization, your Carbon macro-particle weighting should be:

nc = 13.559e27
BASE_DENSITY_SI = 1*nc
w = BASE_DENSITY_SI * V_cell /6.
print(w)

0.001106854213233866, which is below the value stated in your first text (did you adjust cell sizes in between?). However, your minimal weighting is MIN_WEIGHTING = 0.0000001, thus macro-particles are still created.

According to your init pipeline:

        using InitPipeline = bmpl::vector<
            CreateDensity< densityProfiles::XYTubes, startPosition::Random6ppc, Carbon >,
            Manipulate< manipulators::OnceIonized, Carbon >,
            ManipulateDerive< manipulators::binary::UnboundElectronsTimesWeighting, Carbon, Electrons >
        >;

you initialize neutral carbon atoms (according to the above discussion) and then ionize them once. The functor you use afterwards with ManipulateDerive initializes electrons and adjusts the weighting (in your C^+ case, it uses the same weighting as for Carbon, since you only have one unbound electron). (Furthermore, I expected that you would define some ionization, but your Carbon species has no ionization method defined, thus it will never ionize further. Is this intended?)

Regarding your last point 1: Okay, we should assume that the openPMD interface is correct. With the above input, the expected charge density of electrons is:

import scipy.constants as const

BASE_DENSITY_SI = 13.559e27  # 1/m^3
print(BASE_DENSITY_SI * const.elementary_charge * -1.0)

≈ -2.2 × 10^9 C/m^3, which agrees with your output.

Regarding your last point 2: Thank you for the files. I am a bit confused by

killing particles as they exist the simulation domain

Did you mean exit? If yes, the weighting has nothing to do with particles leaving the simulation box (whether with absorbing or periodic boundary conditions). It is just a number describing how many real particles (e.g. electrons) a macro-particle represents. If, in your 6 ppc case, the weighting were below the minimal weighting, fewer than 6 particles would be initialized in a cell, until the minimal weighting requirement is fulfilled again. (Or, if even the weighting of a single macro-particle in a cell is below the minimal weighting, no macro-particle is initialized.)
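A minimal Python sketch of that rule as described above (the logic only, not PIConGPU's actual code):

def initialized_ppc(base_density, v_cell, requested_ppc, min_weighting):
    # reduce the macro-particles per cell until each one's weighting
    # reaches the minimal weighting; 0 means no particle is created
    real_per_cell = base_density * v_cell
    ppc = requested_ppc
    while ppc > 0 and real_per_cell / ppc < min_weighting:
        ppc -= 1
    return ppc

# with the numbers from this thread, all 6 ppc survive, because
# 6.6e-3 / 6 is still above MIN_WEIGHTING = 1e-7
print(initialized_ppc(13.559e27, 4.898e-31, 6, 1e-7))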

Regarding your last point 3: Okay, if you have not included a species.param yourself, the default one is used, which uses a TSC shape.

Regarding the following 1st paragraph: I think you are in a troublesome situation. You need high resolution for resolving your X-ray laser and your fields, but on the other hand, this causes you to have a weighting w << 1, which is questionable for PIC codes.

Regarding the following 2nd paragraph: Yes, your numbers agree. I did not see this while answering your last comment, hence my lengthy derivation above; please excuse that.

Regarding the following 3rd paragraph: Yes, as mentioned above, you should have at least a weighting w = 1, but this conflicts with your other resolution requirements. Perhaps you should initialize less than 1 ppc. You could explicitly define where the cells with w = 1 particles should be, but this will be prone to sampling noise. (A molecular dynamics simulation would probably be better suited for this inter-atomic interaction.)

Regarding the following 4th paragraph: I have never seen people use a larger cell size in the laser propagation direction. In the underdense case (n < n_c), it is commonly the propagation direction that needs higher resolution than the transverse directions, because lambda_laser << lambda_pe. But if your X-ray laser only propagates in one direction, you could reduce the resolution in the other two (transverse) directions and thus end up with larger (non-cubic) cells that fulfill w = 1.
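A quick arithmetic sketch of that option (numbers from this thread; assuming the propagation axis is the long y axis of your 32x2560x32 grid):

import math

n = 13.559e27                       # BASE_DENSITY_SI, 1/m^3
v_needed = 1.0 / n                  # cell volume giving w = 1 at 1 ppc
dy = 0.07883e-9                     # keep the fine spacing along the laser
dx = dz = math.sqrt(v_needed / dy)  # coarsen the transverse directions
print(dx)                           # ~9.7e-10 m, i.e. ~0.97 nm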

cbontoiu commented 4 years ago

Great answer!

In conclusion, the openPMD viewer shows the electron density as C/m^3. I expected it in particles/m^3, and this is where the confusion started.

Indeed I wanted to disable ionization and have only C+ ions and electrons.

I am trying to initialize fewer particles per cell. As it happens, with RandomParameter6ppc there were 6 so far. Now, with

struct RandomParameter1ppc
{
    static constexpr uint32_t numParticlesPerCell = 1u;
};
using Random1ppc = RandomImpl< RandomParameter1ppc >;

and

constexpr uint32_t TYPICAL_PARTICLES_PER_CELL =
    startPosition::RandomParameter1ppc::numParticlesPerCell;

in the particle.param file, will I have 1 electron and 1 carbon ion in each cell, so 2 particles, if the weighting is 1? If this is true, should I consider the total number of particles to be 2*Ncells*cellVolume*BASE_DENSITY if the weighting is 1?

But will weighting be w = BASE_DENSITY_SI * V_cell /1 or w = 2*BASE_DENSITY_SI * V_cell /1?

How can one initialize less than 1ppc? It would require a replacement for static constexpr uint32_t numParticlesPerCell = 1u;

Suppose I am forced to work with a weighting of 0.1 or so. This is less than 1, but not much less. Is there a way to estimate the penalty of not working with 1? Would a different particle-shape order help here, or what else can be done in terms of algorithm development?

PrometheusPi commented 4 years ago

@cbontoiu You are welcome.

May I ask why you can assume Carbon to be only ionized once beforehand and can ignore any further ionization? (The ionization energies of the "last" 4 electrons are quite similar (from 11 eV to 64 eV), and all of them are easily ionized, by tunneling or classically, when a high-power laser is present.)

Setting the number of particles per cell to 1 is a good start. This should also dramatically speed up your simulation and require less memory. Yes, with that setup you will have 1 carbon macro-particle and 1 electron macro-particle initialized per cell (or zero if the density is zero in that cell). However, if you keep your cell resolution, the weighting of each of these macro-particles will only be 6x larger than in the previous setup, thus w = 0.00664.

Sorry, I do not understand

If this is true, should I consider the total number of particles as 2*Ncells*cellVolume*BASE_DENSITY if weighting is 1?

The weighting is (if the resolution is still the same) not 1, thus the condition is not true. The total number of macro-particles is much below 2*Ncells, because you only initialize macro-particles where your density is non-zero. The total number of real electrons/carbon ions is, however, more complicated: assuming all macro-particles have the same weighting w, the number of real electrons/carbon ions in the simulated volume is the number of macro-particles (of that specific species) times their (average) weighting.

But will weighting be w = BASE_DENSITY_SI * V_cell /1 or w = 2*BASE_DENSITY_SI * V_cell /1?

The weighting of electron macro-particles and of carbon macro-particles will (for C+) be the same and equal to: w = BASE_DENSITY_SI * V_cell / 1.
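Plugging in the numbers from above (a quick check; the macro-particle count is a hypothetical placeholder):

w = 13.559e27 * 4.898e-31 / 1.0  # ~6.6e-3 per macro-particle at 1 ppc
n_macro = 1e6                    # hypothetical macro-particle count
n_real = n_macro * w             # real particles = macro-particles x weighting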

How can one initialize less than 1ppc? It would require a replacement for static constexpr uint32_t numParticlesPerCell = 1u;

You could either manipulate your density profile to have discrete "blobs" of high density, or you add a manipulator that thins out your macro-particle distribution, e.g. keeping only every 8th cell. Add to particles.param:

struct FunctorSparse
{
    template< typename T_Particle >
    HDINLINE void operator()(
        DataSpace< simDim > const & particleOffsetToTotalOrigin,
        T_Particle & particle
    )
    {
        // keep_particle is 1 only if the x, y and z cell indices are all
        // odd - i.e. in 1 out of 8 cells in 3D
        const int keep_particle = ( (int(particleOffsetToTotalOrigin.x()) % 2) *
                                    (int(particleOffsetToTotalOrigin.y()) % 2) *
                                    (int(particleOffsetToTotalOrigin.z()) % 2) );

        // kept particles get 8x the weighting (conserving total charge),
        // all others are scaled to zero
        particle[ weighting_ ] *= keep_particle * 8.0;

        // flag zero-weight particles for removal from the particle frame
        if( particle[ weighting_ ] == 0.0 )
        {
            particle[ multiMask_ ] = 0;
        }
    }
    static constexpr char const * name = "makeMacroParticleDistributionSparse";
};

using Sparse = unary::FreeTotalCellOffset<
    FunctorSparse
>;

and adjust speciesInitialization.param:

using InitPipeline = bmpl::vector<
    CreateDensity< densityProfiles::XYTubes, startPosition::Random6ppc, Carbon >, // as before
    Manipulate< manipulators::Sparse, Carbon >, // 7/8 of the particles get w *= 0, 1/8 get w *= 8
    FillAllGaps< Carbon >, // remove w == 0 particles from the particle frames
    Manipulate< manipulators::OnceIonized, Carbon >, // ionize the remaining Carbon as before
    ManipulateDerive< manipulators::binary::UnboundElectronsTimesWeighting, Carbon, Electrons > // derive electrons from the remaining Carbon as before (they are now also sparse)
>;
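For the numbers in this thread, this thinning changes the per-particle weighting as follows (a quick check):

w_6ppc = 13.559e27 * 4.898e-31 / 6  # ~1.1e-3 with plain 6 ppc
w_sparse = w_6ppc * 8               # ~8.9e-3 after keeping 1 cell in 8

so the weighting rises by 8x, at the cost of sparser spatial sampling.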

Suppose I am forced to work with a weighting of 0.1 or so. This is less than 1 but not much less. Is there a way that I can estimate the penalty of not working with 1? Will a different particle shape order help here, or what else can be done in terms of algorithm development?

That is actually a question I wanted to ask you. The macro-particles in particle-in-cell simulations are just sample points of the distribution function f(r,v,t) of the Vlasov equation; they have a delta distribution in momentum and a finite spatial distribution, in order to simplify the code by preventing macro-particles from spreading out in space. However, at your spatial scale, f(...) would actually not be spatially homogeneous, as assumed by most plasma simulations, where the number of real particles in a cell is much higher and an averaged homogeneous density can be assumed; you are at a level where f(...) should be a sum of delta-distributed particle positions in the 6D phase space.

However, this would also be a classical picture (Vlasov is classical). With quantum mechanics in mind, there is no fixed (delta-distribution-like) momentum and position for all particles. Thus w < 1 more or less represents a sampling of the electron/carbon distribution probability <ψ|ψ>, but treated classically. I am not sure whether such a treatment neglects relevant purely quantum mechanical effects (it will definitely neglect quantum mechanical effects; the question is whether they are relevant). I know that particle dynamics at such small scales are commonly simulated with molecular dynamics codes assuming discrete particle positions (w = 1), but these also have to perform numerous simulations in order to make a statistical prediction.

The question for me (to you) is now: are you using PIC codes to emulate such a statistical distribution with a single (w < 1) simulation? How can you ensure that the classical statistical PIC treatment converges to your (quantum mechanical) probability distribution? And why can your density distribution be assumed to be step-like? (Sorry, I have no idea about the electron distribution function <ψ|ψ> of carbon nanotubes - I only know single-atom distributions, and there the probability decreases with increasing distance from the nucleus.)

cc'ing @ax3l for his opinion on w<1 simulations.

cbontoiu commented 4 years ago

Thanks a lot for your detailed answer.

The main idea for not using ionization is that the 6th electron of every carbon atom is shared in a conduction band, like in solids, and must be generated beforehand. They must be there, ready to move.

The reason for using PIConGPU is that I don't need to work at the quantum level (though I will have a look at molecular dynamics codes). For a density of ~13e27 electrons/m^3 the plasma wavelength is ~287 nm, and my idea for ensuring that the classical statistical PIC treatment is valid rests on the fact that the carbon nanotubes can be as thick as 30 nm, which would allow me to increase the cell size and converge to weighting = 1.
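That plasma wavelength can be checked quickly (a minimal sketch using scipy's physical constants):

import math
import scipy.constants as const

n_e = 13.559e27  # electron density from this thread, 1/m^3
omega_pe = math.sqrt(n_e * const.e**2 / (const.epsilon_0 * const.m_e))
print(2 * math.pi * const.c / omega_pe)  # ~2.87e-7 m, i.e. ~287 nm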

Overall, the plan is to:

  1. enable ionization (only BSIEffectiveZ and ADKLinPol, because collisions do not occur in reality) in order to find at which intensity a linearly polarized laser of ~287 nm can ionize carbon atoms;
  2. disable ionization but generate free electrons instead (one for each carbon ion) and study the axial wakefields using the laser parameters found before;
  3. introduce an electron test bunch into these wakefields and study its dynamics. Here any help would be highly appreciated, because I have no idea how to do that, and I opened a new issue (#3169) for it.

Thank you!

ax3l commented 4 years ago

Since I was pinged here, I'll just quickly jump in on a few points without reading the whole thread.

opinion on w<1 simulations.

There is no real problem herein; it just has to be taken care of when modelling actual fundamental (quantized) processes. I elaborate a bit on weights and their meaning in section 2.3.2 of my thesis: https://doi.org/10.5281/zenodo.3266820

In conclusion the openPMD viewer shows electrons density as C/m^3. I expected this in particles/m^3 and this is where the confusion started from.

openPMD-viewer should show densities, such as e_Density, as particles/m^3. But I think you are generating charge densities:

e_ChargeDensity

which is in C/m^3. You can also generate particle densities; the options are set in param/fileOutput.param (and in the .cfg file if you sub-select fields further, e.g. for HDF5 or ADIOS).
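A minimal sketch of reading such a field with openPMD-viewer and converting charge density back to a number density (the output path and the exact field name are assumptions - check ts.avail_fields for what your fileOutput.param actually produced):

from openpmd_viewer import OpenPMDTimeSeries
import scipy.constants as const

ts = OpenPMDTimeSeries('./simOutput/h5')           # hypothetical output path
rho, info = ts.get_field(field='e_chargeDensity',  # name may differ per setup
                         iteration=0)
n_e = rho / -const.elementary_charge               # C/m^3 -> 1/m^3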