CmPA / iPic3D

Particle-in-Cell code using the implicit moment method

suppress runaway particle instability #71

Open alecjohnson opened 10 years ago

alecjohnson commented 10 years ago
In iPic3D, pushing of fast-moving particles is unstable, which can lead to runaway particles, which in turn can cause the entire simulation to blow up. This instability results from a combination of two things:
  1. Fast particles are pushed with extrapolated field values when they cross MPI subdomain boundaries.
  2. Fast-moving particles produce noise in the moments, because current densities and other moments that should be spread out over the path traversed by a particle in a time step are instead concentrated at its position at the beginning of the time step.
As a consequence, we cannot expect to resolve power laws of high-energy particles accurately with the existing mover.

This problem could be addressed in the following three ways:

  1. Cap particle velocities when communicating particles (see the first sketch after this list).
  2. Do not use extrapolated field values in the mover; rather, take field values from the closest point in the "guarded" subdomain (the domain including the ghost cells). In my mover_PC_AoS() method this could be accomplished by modifying the definition of get_safe_cell_and_weights(), replacing code that clamps cell indices into the guarded grid, such as
        if (cx < 0) cx = 0;
        if (cy < 0) cy = 0;
        if (cz < 0) cz = 0;
        if (cx > cxlast) cx = cxlast; //nxc-1;
        if (cy > cylast) cy = cylast; //nyc-1;
        if (cz > czlast) cz = czlast; //nzc-1;
    
    with code that maps position values (the ones used to compute the cell indices and node weights) into the guarded subdomain:
        // here cx_pos references position in cell coordinates
        // and epsilon denotes machine epsilon.
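        // nxc_minus_epsilon etc. presumably hold values just below
        // nxc, nyc, nzc, so that floor() below returns the last valid
        // cell index (assumption; they are not defined in this excerpt).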
        if (cx_pos < 0) cx_pos = epsilon;
        if (cy_pos < 0) cy_pos = epsilon;
        if (cz_pos < 0) cz_pos = epsilon;
        if (cx_pos >= nxc) cx_pos = nxc_minus_epsilon;
        if (cy_pos >= nyc) cy_pos = nyc_minus_epsilon;
        if (cz_pos >= nzc) cz_pos = nzc_minus_epsilon;
    
        cx = int(floor(cx_pos));
        cy = int(floor(cy_pos));
        cz = int(floor(cz_pos)); 
        assert(cx >= 0);
        assert(cy >= 0);
        assert(cz >= 0);
        assert(cx < nxc);
        assert(cy < nyc);
        assert(cz < nzc);      
    
  3. Subcycle fast particles (see the second sketch after this list). This is imperative if we wish to accurately resolve the physics of high-energy particles and obtain accurate power laws. It would also make the algorithm independent of the MPI subdomain partitioning. However, implementing subcycling requires separating out partially moved particles when communicating them in the mover, and the checks in the subcycled mover involve some delicate logic and some additional computational expense. Therefore, a combination of the previous two alternatives provides a quick fix until this more satisfactory fix is implemented.
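For option 1, here is a minimal sketch of a velocity cap, assuming it is applied to each particle just before it is packed into the MPI send buffer; cap_velocity and vmax are illustrative names, not existing iPic3D identifiers:

    #include <cmath>

    // Hypothetical helper: rescale a velocity (u,v,w) so that its
    // magnitude does not exceed vmax. Names are illustrative only.
    static void cap_velocity(double& u, double& v, double& w, double vmax)
    {
      const double speed2 = u * u + v * v + w * w;
      if (speed2 > vmax * vmax) {
        const double scale = vmax / std::sqrt(speed2);
        u *= scale;
        v *= scale;
        w *= scale;
      }
    }

A reasonable cap would be a few mesh cells per time step, e.g. vmax = 4.0 * dx / dt, so that only the unstable fast tail is affected.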
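For option 3, here is a minimal sketch of the subcycling loop, ignoring the subdomain-crossing bookkeeping described above; push_one_step is an assumed helper standing in for one implicit-mover update of a single particle:

    // Hypothetical subcycling: choose enough substeps that the particle
    // moves less than about one cell per substep. push_one_step is an
    // assumed helper, not an existing iPic3D function.
    const double speed = std::sqrt(u[i]*u[i] + v[i]*v[i] + w[i]*w[i]);
    const int nsub = 1 + int(speed * dt / dx);
    const double sub_dt = dt / nsub;
    for (int s = 0; s < nsub; s++)
      push_one_step(i, sub_dt); // field gather + velocity/position update

The delicate part is precisely what this loop hides: a particle that leaves the subdomain mid-substep must be communicated with its remaining fraction of dt and resumed by the receiving process.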

Why the pushing of fast-moving particles is unstable in the current mover.

When a particle moving more than about one mesh cell per cycle crosses from one MPI subdomain to another, it is moved using bilinearly extrapolated field values to avoid the need for particle communication between iterations of the implicit particle mover. This is unstable. The velocity of a sufficiently fast particle is almost certain to blow up, resulting in it becoming a runaway particle.

The presence of even a small number of runaway particles will tend to cause the entire simulation to blow up. To understand why, consider how a fast particle contributes to the computed moments. Rather than being spread over the path of motion of the particle, the computed contribution of the current density of a fast particle is concentrated at the sequence of its positions at consecutive time steps (a "stroboscope" effect). The presence of fast-moving particles thus creates bumps in the computed moments, destroying their smoothness. Lack of smoothness in the computed moments leads to lack of smoothness in the computed fields, leading to greater wildness in the extrapolated field values used to push fast particles crossing MPI subdomain boundaries, leading to further blow-up of the velocities of fast particles.

The algorithm resolves well the physics of particles moving less than one mesh cell per time step, but resolves the physics of a particle moving more than one mesh cell per time step very poorly, for the same type of reason that integrating a particle shape function by summing equally-spaced samples is accurate if the sample points resolve the width of the shape function and is unpredictable and inaccurate otherwise. Summing equally spaced samples is a spectrally accurate integration rule, which for a bandwidth-limited periodic function becomes spectacularly accurate as the sampling interval finely resolves the highest frequency and spectacularly inaccurate as the sampling interval coarsely underresolves it.
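As a standalone numerical illustration of this sampling effect (not iPic3D code), consider summing equally spaced samples of a Gaussian shape function of width w; when the spacing h resolves w, the sum reproduces the integral regardless of where the samples fall, and when h exceeds w the answer swings wildly with the sampling phase:

    #include <cstdio>
    #include <cmath>

    // h * (sum of exp(-x^2/w^2) sampled at spacing h, offset by phase).
    double sampled_integral(double w, double h, double phase)
    {
      double sum = 0.0;
      for (double x = -50.0 + phase; x <= 50.0; x += h)
        sum += std::exp(-x * x / (w * w));
      return h * sum;
    }

    int main()
    {
      const double w = 1.0;                                 // shape function width
      const double exact = std::sqrt(std::acos(-1.0)) * w;  // integral = sqrt(pi)*w
      for (double h : {0.5, 2.0, 4.0})
        std::printf("h=%g: phase 0 -> %.4f, phase h/2 -> %.4f (exact %.4f)\n",
                    h, sampled_integral(w, h, 0.0),
                    sampled_integral(w, h, 0.5 * h), exact);
      return 0;
    }

With h = 0.5 both phases reproduce sqrt(pi) to many digits; with h = 4 the two phases give roughly 0.15 and 4.0 around the true value of about 1.77.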

The result is that even a few runaway particles will eventually cause the entire simulation to blow up.

volshevsky commented 10 years ago

Alec, check the new relativistic and non-relativistic movers in the SVN.

alecjohnson commented 10 years ago

I looked at mover_PC() in the svn code. As far as I can see, it makes no attempt at all to guard array access! So in the 2nd and 3rd iterations of the mover, when you access the electric field, you could be accessing memory that is completely outside the array. (Of course, most of the time, you will be accessing "random" data inside the array...)

In detail:

mover_PC is located in

particles/Particles3D.cpp

    /** mover with a Predictor-Corrector scheme */
    int Particles3D::mover_PC(Grid * grid, VirtualTopology3D * vct, Field * EMf) {
      double dto2 = .5 * dt, qomdt2 = qom * dto2 / c, omdtsq, denom, ut, vt, wt, udotb;
      double Exl = 0.0, Eyl = 0.0, Ezl = 0.0, Bxl = 0.0, Byl = 0.0, Bzl = 0.0;
      double inv_dx = 1.0 / dx, inv_dy = 1.0 / dy, inv_dz = 1.0 / dz;
      int ix, iy, iz;
      double xptilde, yptilde, zptilde, uptilde, vptilde, wptilde;
      double weight[2][2][2];
      //double* weight = newArr3(double, 2, 2, 2);
      // move each particle with new fields
      for (int i = 0; i < nop; i++) {
        xptilde = x[i];
        yptilde = y[i];
        zptilde = z[i];
        // calculate the average velocity iteratively
        for (int innter = 0; innter < 3; innter++) {
          // interpolation G-->P
          // EAJ note: in the 2nd and 3rd iterations ix,iy,iz could lie
          // arbitrarily far outside the array:
          ix = 2 + int((x[i] - xstart) * inv_dx);
          iy = 2 + int((y[i] - ystart) * inv_dy);
          iz = 2 + int((z[i] - zstart) * inv_dz);
          // EAJ note: this call does nothing to correct the problem
          calculateWeights(weight, x[i], y[i], z[i], ix, iy, iz, grid);
          Exl = 0.0, Eyl = 0.0, Ezl = 0.0, Bxl = 0.0, Byl = 0.0, Bzl = 0.0;
          for (int ii = 0; ii < 2; ii++)
            for (int jj = 0; jj < 2; jj++)
              for (int kk = 0; kk < 2; kk++) {
                // EAJ note: arguments to getEx etc. are thus completely unconstrained...
                Exl += weight[ii][jj][kk] * (EMf->getEx(ix - ii, iy - jj, iz - kk));
                Eyl += weight[ii][jj][kk] * (EMf->getEy(ix - ii, iy - jj, iz - kk));
                Ezl += weight[ii][jj][kk] * (EMf->getEz(ix - ii, iy - jj, iz - kk));
                Bxl += weight[ii][jj][kk] * (EMf->getBx(ix - ii, iy - jj, iz - kk));
                Byl += weight[ii][jj][kk] * (EMf->getBy(ix - ii, iy - jj, iz - kk));
                Bzl += weight[ii][jj][kk] * (EMf->getBz(ix - ii, iy - jj, iz - kk));
                [...]
              }
          // update position
          x[i] = xptilde + uptilde * dto2;
          y[i] = yptilde + vptilde * dto2;
          z[i] = zptilde + wptilde * dto2;
        } // end of iteration
        // update the final position and velocity
        u[i] = 2.0 * uptilde - u[i];
        v[i] = 2.0 * vptilde - v[i];
        w[i] = 2.0 * wptilde - w[i];
        x[i] = xptilde + uptilde * dt;
        y[i] = yptilde + vptilde * dt;
        z[i] = zptilde + wptilde * dt;
      } // end of particles
    }
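For reference, a minimal guard in the spirit of option 2 from the original post would clamp the indices just before the field gather; this is a sketch only, assuming nxn, nyn, nzn are the local node counts including ghost nodes:

    // Hypothetical clamp (a sketch, not a committed fix): keep the 2x2x2
    // stencil ix-1..ix, iy-1..iy, iz-1..iz inside the guarded grid,
    // assuming nxn, nyn, nzn count nodes including ghosts.
    if (ix < 1) ix = 1;
    if (iy < 1) iy = 1;
    if (iz < 1) iz = 1;
    if (ix > nxn - 1) ix = nxn - 1;
    if (iy > nyn - 1) iy = nyn - 1;
    if (iz > nzn - 1) iz = nzn - 1;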

Alec


xiaowang119 commented 2 years ago

Hi, has this problem been solved in the master branch? Any suggestions?