sslattery / Chimera

Reduced Domain Approximation Analysis #31

Open sslattery opened 11 years ago

sslattery commented 11 years ago

I have developed what I'm calling the "Reduced Domain Approximation" as a mechanism for overcoming the dense composite matrices that result from effective preconditioning. This should hopefully speed up calculations and enable domain decomposed parallelism. The basic principles are:

  1. Build a composite operator with a quality preconditioner for the problem (e.g. ILUT).
  2. The composite operator will generally have many non-zero entries within floating point tolerance of zero; these are filtered with a tolerance.
  3. A very dense operator with thousands of entries per row remains; keep only the largest entries by absolute value using a fill level parameter.
  4. Add back a specified fraction of the weight lost to the tolerance and fill level filtering (see the sketch after this list).
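
Here's a minimal sketch of steps 2-4 in SciPy terms. Chimera itself is C++/Trilinos and this is not its API: the function name, parameter names, and the proportional redistribution of recovered weight in step 4 are all my assumptions.

```python
import numpy as np
import scipy.sparse as sp

def reduced_domain_approximation(H, filter_tol, fill_level, recovery_fraction):
    """Row-wise filtering of a composite operator (illustrative only)."""
    H = sp.csr_matrix(H)
    rows, cols, vals = [], [], []
    for i in range(H.shape[0]):
        lo, hi = H.indptr[i], H.indptr[i + 1]
        idx, v = H.indices[lo:hi], H.data[lo:hi]
        w_full = np.abs(v).sum()  # row weight before any filtering

        # Step 2: drop entries within tolerance of zero.
        keep = np.abs(v) >= filter_tol
        idx, v = idx[keep], v[keep]

        # Step 3: keep only the fill_level largest-magnitude entries.
        if v.size > fill_level:
            largest = np.argsort(np.abs(v))[-fill_level:]
            idx, v = idx[largest], v[largest]

        # Step 4: add back a fraction of the lost weight, spread
        # proportionally over the surviving entries (an assumption).
        w_kept = np.abs(v).sum()
        if w_kept > 0.0:
            v = v * (w_kept + recovery_fraction * (w_full - w_kept)) / w_kept

        rows.extend([i] * v.size)
        cols.extend(idx)
        vals.extend(v)
    return sp.csr_matrix((vals, (rows, cols)), shape=H.shape)
```

With `recovery_fraction = 1` each filtered row recovers its full pre-filter weight; with `0` the rows keep their reduced weight.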

The net effect is a reduction in the local size of the composite operator, and therefore in the size of the Monte Carlo domain, while maintaining convergence. As we modify each row of the system this way, we reduce the number of entries and the overall weight of transitions in that row. Reducing this weight causes faster convergence of the chains. We can instead recover this weight to maintain the same weight per transition in that row as before the filtering, decreasing the number of iterations required to converge and improving robustness at the cost of longer Markov chains due to the higher weights.
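
To make the weight argument concrete (standard adjoint Monte Carlo bookkeeping; the notation is mine, not from the code): row $i$ of the iteration matrix $H$ splits into transition probabilities and weights,

```latex
p_{ij} = \frac{|h_{ij}|}{W_i}, \qquad
w_{ij} = \frac{h_{ij}}{p_{ij}} = \operatorname{sgn}(h_{ij})\,W_i, \qquad
W_i = \sum_j |h_{ij}|,
```

so a chain's weight after $k$ steps scales like $\prod_{m=1}^{k} W_{i_m}$. Filtering lowers the $W_i$ and chains die out sooner; weight recovery pushes the $W_i$ back toward their pre-filter values, lengthening the chains but restoring the per-transition weights.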

From an acceleration perspective this also makes sense: we are effectively using a reduced order model of the linear system in the Monte Carlo residual solve, an approximation that still yields a useful correction for convergence. Although the above is purely algebraic, it has links to reduced order physics modeling, where a simpler representation of the model problem is solved to improve the solution of a more detailed and harder-to-solve problem.

The purpose of this task is to assess the performance of the reduced domain approximation with respect to the parameters of filter tolerance, fill level, and weight recovery fraction.
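
A hypothetical driver for that study, reusing the sketch above; the parameter grids and the random stand-in operator are placeholders, and the MCSA solve itself is elided:

```python
import itertools
import scipy.sparse as sp

# Stand-in for a dense composite operator; the real study would use the
# ILUT- or ParaSails-preconditioned SPN operator.
H = sp.random(500, 500, density=0.4, format="csr", random_state=0)

for tol, fill, frac in itertools.product([1e-8, 1e-4], [50, 200], [0.0, 0.5, 1.0]):
    H_r = reduced_domain_approximation(H, tol, fill, frac)
    # Real study: run MCSA with H_r here and record iterations/robustness.
    print(f"tol={tol:g} fill={fill} recovery={frac}: "
          f"{H_r.nnz / H_r.shape[0]:.1f} entries/row")
```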

sslattery commented 11 years ago

Using this approximation, I've observed convergence of ILUT preconditioned MCSA with 21 entries per row in the composite SPN operator, only 3 times as many as the original SPN operator and a big improvement over 2000 entries per row. This makes sense: if the ILUT parameters were chosen to give a pretty good factorization, then the domain entries remaining after the approximation are the largest and make the greatest contribution to the Neumann series, effectively capturing most of the important pieces of the problem.
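
For reference, the standard Neumann series behind that claim (nothing Chimera-specific here): with $H = I - M^{-1}A$,

```latex
x = \sum_{k=0}^{\infty} H^{k} M^{-1} b, \qquad
(H^{k})_{ij} = \sum_{j_1,\dots,j_{k-1}} h_{i j_1} h_{j_1 j_2} \cdots h_{j_{k-1} j},
```

so the series terms are sums of products over paths through entries of $H$, and paths through the largest-magnitude entries dominate; these are exactly the entries the filter keeps.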

sslattery commented 11 years ago

I've tested the reduced domain approximation with ParaSails preconditioning for SP1, 1-group problem 3. Here are the results with full weight recovery:

[figure: rda_wr_1]

We note first that we can't reduce the fill level as low as we could with the ILUT preconditioning. ParaSails doesn't do as good a job of preconditioning in the first place, so throwing away enough domain entries ultimately destroys convergence. The good news is that reducing the domain doesn't particularly hurt iterative performance when the method converges, with an ideal fill level of 200 entries per row.

Now the weight recovery is set to 0:

[figure: rda_wr_0]

This helps iterative performance tremendously, and we can now converge with a significantly reduced domain (at some cost in iterative performance). This differs from the initial ILUT observations: here, zero weight recovery aids robustness. My thinking is that the reduced domain from a more weakly preconditioned operator (ParaSails in this case) should not be pushed as hard in the Monte Carlo solve, so reducing the size of the contributions to the estimator by reducing the weights gives an effect similar to using a Neumann relaxation parameter of less than 1.
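
A quick way to formalize that analogy (my algebra, not from the issue): an under-relaxed Richardson iteration $x^{k+1} = x^k + \omega M^{-1} r^k$ has iteration matrix

```latex
H(\omega) = I - \omega M^{-1} A = (1-\omega)\,I + \omega H,
```

so for $\omega < 1$ every off-diagonal transition weight of $H$ is scaled by $\omega$, which is qualitatively the same damping that zero weight recovery applies to the filtered rows.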