AMReX-Codes / amrex

AMReX: Software Framework for Block Structured AMR
https://amrex-codes.github.io/amrex

Two issues about using cell center MLMG solver in the 3D channel flow case #516

Closed: ruohai0925 closed this issue 5 years ago

ruohai0925 commented 5 years ago

Hello all,

I have two issues with the cell-centered MLMG solver in a 3D channel flow case. Before listing them, let me summarize the setup.

I have developed a code that uses the MLMG solver to solve for the cell-centered pressure in a 3D channel flow. The boundary conditions are:

Top: no-slip; Bottom: no-slip; Horizontal: periodic

Gravity is ignored in this case. To generate turbulence quickly, the initial velocity field is given by:

  ! Zero out the velocity field.
  do k = lo(3), hi(3)
     do j = lo(2), hi(2)
        do i = lo(1), hi(1)
           vel(i,j,k,1) = 0.d0
           vel(i,j,k,2) = 0.d0
           vel(i,j,k,3) = 0.d0
        end do
     end do
  end do

  ! Superimpose random-phase sinusoidal perturbations in x and z, with a
  ! (y-1)**2 envelope across the channel, to trigger transition quickly.
  do k3 = 1, 7
     do k1 = 1, 7
        call random_number(ran)
        phasex = 2.d0*pi*ran
        call random_number(ran)
        phasey = 2.d0*pi*ran
        call random_number(ran)
        phasez = 2.d0*pi*ran

        do k = lo(3), hi(3)
           z = xlo(3) + hz*(dble(k-lo(3))+half)
           do j = lo(2), hi(2)
              y = xlo(2) + hy*(dble(j-lo(2))+half)
              do i = lo(1), hi(1)
                 x = xlo(1) + hx*(dble(i-lo(1))+half)

                 ! Note the integer exponent: (y-1.d0) is negative below the
                 ! centerline, and a negative real base with a real exponent
                 ! is not standard-conforming Fortran.
                 vel(i,j,k,1) = vel(i,j,k,1) + 0.01d0*(y-1.d0)**2*20.d0 &
                                             * dcos(k1*x + k3*z + phasex)
                 vel(i,j,k,2) = vel(i,j,k,2) + 0.01d0*(y-1.d0)**2*20.d0 &
                                             * dsin(k1*x + k3*z + phasey)
                 vel(i,j,k,3) = vel(i,j,k,3) + 0.01d0*(y-1.d0)**2*20.d0 &
                                             * dcos(k1*x + k3*z + phasez)
              end do
           end do
        end do
     end do
  end do

Similar to IAMR, there are three solves at each advance step: Mac_Proj, Visc_solve, and the Projection for pressure. Here are the parameters. For Mac_Proj:

mac_tol     = 1.0e-4  # default 1.0e-11
mac_abs_tol = 1.0e-5  # default 1.0e-12
max_iter    = 1000

For Projection for pressure:

proj.proj_tol     = 1.0e-11  # default 1e-12, controls the MLMG solver
proj.proj_abs_tol = 1.0e-12  # default 1e-16, controls the MLMG solver
mlmg.setMaxIter(100);
mlmg.setMaxFmgIter(0);
mlmg.setVerbose(4);
mlmg.setCGVerbose(2);
mlmg.setBottomVerbose(4);
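
For reference, here is a minimal sketch of how settings like these are typically passed to MLMG for the projection solve. It assumes the linear operator (linop) has already been set up with its boundary conditions and coefficients; linop, phi, rhs, proj_tol, and proj_abs_tol are placeholder names, not necessarily the identifiers in my code.

#include <AMReX_MLABecLaplacian.H>
#include <AMReX_MLMG.H>
#include <AMReX_MultiFab.H>

using namespace amrex;

// Sketch only: solve linop(phi) = rhs with the MLMG settings listed above.
Real do_projection_solve (MLABecLaplacian& linop,
                          MultiFab& phi, const MultiFab& rhs,
                          Real proj_tol,       // relative tolerance
                          Real proj_abs_tol)   // absolute tolerance
{
    MLMG mlmg(linop);
    mlmg.setMaxIter(100);      // give up after 100 V-cycles
    mlmg.setMaxFmgIter(0);     // no full-multigrid cycles before the V-cycles
    mlmg.setVerbose(4);
    mlmg.setBottomVerbose(4);

    // Convergence is roughly resid <= max(proj_tol * norm(rhs), proj_abs_tol);
    // solve() returns the final residual.
    return mlmg.solve({&phi}, {&rhs}, proj_tol, proj_abs_tol);
}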

At the first step, the Mac_Proj converges very slowly (first issue). That is why I set mac_tol to 1.0e-4 and mac_abs_tol to 1.0e-5 instead of the defaults of 1.0e-11 and 1.0e-12; otherwise it cannot converge within the 1000-iteration limit. Any comments?

Here is the slow-convergence output:

MLMG: # of AMR levels: 1
      # of MG levels on the coarsest AMR level: 4
MLMG: Initial rhs               = 637.3524679
MLMG: Initial residual (resid0) = 637.3524679
MLMG: Iteration   1 Fine resid/bnorm = 0.4898791842
MLMG: Iteration   2 Fine resid/bnorm = 0.3136508606
......
MLMG: Iteration 247 Fine resid/bnorm = 0.0001024827894
MLMG: Iteration 248 Fine resid/bnorm = 0.0001013112324
MLMG: Iteration 249 Fine resid/bnorm = 0.0001002065729
MLMG: Iteration 250 Fine resid/bnorm = 9.911665121e-05
MLMG: Final Iter. 250 resid, resid/bnorm = 0.06317224226, 9.911665121e-05
MLMG: Timers: Solve = 96.81575129 Iter = 96.71591889 Bottom = 0.133487579

Still at the first step, the Visc_solve works well, but the solver blows up in the Projection part (second issue). I have checked the MultiFabs acoef, bcoef, and rhs before calling the MLMG solver: no NaN or Inf values appear in them, and the rhs values look reasonable. The mathematical boundary conditions are periodic (x), Neumann (y), and periodic (z). Here are the outputs:

In the MLABecLaplacian::compRHS2dlevel()
 MLABecLaplacian::buildMasks()
MLMG: Subtracting 1.675343711e-13 from rhs
MLMG: # of AMR levels: 1
      # of MG levels on the coarsest AMR level: 4
MLMG: Initial rhs               = 63355046.92
MLMG: Initial residual (resid0) = 63355046.92
MLCGSolver_BiCGStab: Initial error (error0) =        2129435.705
MLCGSolver_BiCGStab: Half Iter           1 rel. err. 2.553605483
MLCGSolver_BiCGStab: Iteration           1 rel. err. 0.7451352937
MLCGSolver_BiCGStab: Half Iter           2 rel. err. 0.7939863741
MLCGSolver_BiCGStab: Iteration           2 rel. err. 0.4504883597
......
MLCGSolver_BiCGStab: Half Iter          73 rel. err. 0.0001162072751
MLCGSolver_BiCGStab: Iteration          73 rel. err. 0.0001158028048
MLCGSolver_BiCGStab: Half Iter          74 rel. err. 0.0001105467486
MLCGSolver_BiCGStab: Iteration          74 rel. err. 0.0001093501413
MLCGSolver_BiCGStab: Half Iter          75 rel. err. 9.385865812e-05
MLCGSolver_BiCGStab: Final: Iteration   75 rel. err. 9.385865812e-05
MLMG: Iteration   1 Fine resid/bnorm = 1.39802473
......
MLMG: Iteration   2 Fine resid/bnorm = 3.839101706
......
MLMG: Iteration   3 Fine resid/bnorm = 17.95931696
......
MLMG: Iteration   4 Fine resid/bnorm = 82.0753655
......
MLMG: Iteration   5 Fine resid/bnorm = 387.4987785
......
MLMG: Iteration 100 Fine resid/bnorm = 2.705466705e+74
MLMG: Failed to converge after 100 iterations. resid, resid/bnorm = 1.7140497e+82, 2.705466705e+74
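
For completeness, here is a minimal sketch (assuming a 3D build, with the Geometry marked periodic in x and z) of how the boundary conditions described above, periodic in x, Neumann in y, periodic in z, would typically be attached to a cell-centered MLABecLaplacian before handing it to MLMG. All names are placeholders, and the scalars are illustrative rather than taken from my actual formulation.

#include <AMReX_MLABecLaplacian.H>
#include <AMReX_MultiFab.H>

using namespace amrex;

// Sketch only: attach channel-flow BCs and coefficients to a single-level,
// cell-centered MLABecLaplacian. acoef is cell-centered, bcoef face-centered.
void setup_projection_op (MLABecLaplacian& mlabec,
                          const MultiFab& acoef,
                          const Array<MultiFab const*, AMREX_SPACEDIM>& bcoef)
{
    // Low and high sides: x and z periodic, y (wall-normal) Neumann.
    mlabec.setDomainBC({LinOpBCType::Periodic, LinOpBCType::Neumann, LinOpBCType::Periodic},
                       {LinOpBCType::Periodic, LinOpBCType::Neumann, LinOpBCType::Periodic});

    // Periodic and homogeneous Neumann faces need no boundary data, so nullptr is fine.
    mlabec.setLevelBC(0, nullptr);

    // Illustrative scalars: alpha = 0, beta = 1 makes the operator -div(b grad phi);
    // the actual values depend on the projection formulation.
    mlabec.setScalars(0.0, 1.0);
    mlabec.setACoeffs(0, acoef);
    mlabec.setBCoeffs(0, bcoef);
}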

This code was previously used to simulate a complicated two-phase 3D rising-bubble case, and the MLMG solver worked well there. I am not quite sure why it does not work in this single-phase 3D channel flow case.

Any suggestions are appreciated. I will respond as soon as possible if more information is needed to resolve these issues.

Thanks and best.

Jordan

asalmgren commented 5 years ago

Jordan -- is this single-level or multiple-AMR-level?

What is your domain size and max_grid_size?


ruohai0925 commented 5 years ago

Single-level.

Domain size: 12.56637 x 2.0 x 6.283185307
Grid size: 192 x 360 x 160
max_grid_size: 360
Number of cores: 2

asalmgren commented 5 years ago

So dx, dy, and dz are very different, it seems?

dx = 0.06544984375
dy = 0.00555555555555555555
dz = 0.03926990816875000000

Why did you choose those numbers of cells?
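
For reference, a minimal sketch of the arithmetic behind those spacings, using only the domain and grid sizes reported above; the resulting aspect ratios, roughly dx/dy ~ 11.8 and dz/dy ~ 7.1, quantify the anisotropy in question.

#include <cstdio>

int main ()
{
    // Domain extents and cell counts reported above.
    const double Lx = 12.56637, Ly = 2.0, Lz = 6.283185307;
    const int    nx = 192,      ny = 360, nz = 160;

    const double dx = Lx/nx, dy = Ly/ny, dz = Lz/nz;
    std::printf("dx = %.11f  dy = %.11f  dz = %.11f\n", dx, dy, dz);
    std::printf("dx/dy = %.2f  dz/dy = %.2f\n", dx/dy, dz/dy);   // ~11.78 and ~7.07
    return 0;
}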


ruohai0925 commented 5 years ago

This is a direct numerical simulation (DNS) case. Finer grid spacing is needed in the y-direction to resolve the Kolmogorov scale (set by the Reynolds number) near the wall surfaces in this 3D turbulent flow, so I use 360 cells in the y-direction. The cell counts in the x- and z-directions are based on experience from previous cases and are not very large.

I see the problem: the ratios dx/dy and dz/dy are too large. But I am not sure whether the MLMG solver can handle such a strongly stretched (anisotropic) grid. Alternatively, I could adjust the cell counts and test the case again.

Does max_grid_size influence the solver?

Thanks.
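
Regarding the max_grid_size question just above, a hedged aside: max_grid_size only controls how the domain is chopped into boxes, but it can indirectly limit how many geometric MG levels MLMG builds, since each factor-of-two coarsening has to keep the boxes and the domain evenly divisible. In this particular case, 360 = 8 x 45 already stops the coarsening after three halvings (360 -> 180 -> 90 -> 45), which is consistent with the "# of MG levels on the coarsest AMR level: 4" in the output, independent of max_grid_size. A minimal sketch of where max_grid_size enters (make_level_grids is a hypothetical helper, not an AMReX API):

#include <AMReX_Box.H>
#include <AMReX_BoxArray.H>

using namespace amrex;

// Sketch only: how a level's BoxArray is usually created. max_grid_size sets
// the box sizes; geometric multigrid can coarsen only while every box (and
// the domain) remains divisible by 2.
BoxArray make_level_grids (const Box& domain, int max_grid_size)
{
    BoxArray ba(domain);         // one box covering the whole domain
    ba.maxSize(max_grid_size);   // chop it into boxes no larger than max_grid_size
    return ba;
}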

WeiqunZhang commented 5 years ago

MLMG could use Hypre's AMG instead of its own geometric multigrid. You should give it a try. See https://github.com/AMReX-Codes/amrex/issues/381.
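
A hedged sketch of what that typically looks like, assuming AMReX was built with hypre support (e.g. USE_HYPRE=TRUE with the GNU make build). The operator setup (boundary conditions and coefficients) is elided here; it would be the same as in the earlier sketch, and all variable names are placeholders.

#include <AMReX_MLABecLaplacian.H>
#include <AMReX_MLMG.H>

using namespace amrex;

// Sketch only: stop AMReX's own geometric coarsening and hand the problem
// to hypre's AMG through MLMG's bottom-solver hook.
Real solve_with_hypre_bottom (const Geometry& geom, const BoxArray& grids,
                              const DistributionMapping& dmap,
                              MultiFab& phi, const MultiFab& rhs,
                              Real reltol, Real abstol)
{
    LPInfo info;
    info.setMaxCoarseningLevel(0);   // 0 => no geometric coarsening; hypre sees the fine-level problem

    MLABecLaplacian mlabec({geom}, {grids}, {dmap}, info);
    // ... setDomainBC / setLevelBC / setScalars / setACoeffs / setBCoeffs as in the earlier sketch ...

    MLMG mlmg(mlabec);
    mlmg.setBottomSolver(MLMG::BottomSolver::hypre);
    return mlmg.solve({&phi}, {&rhs}, reltol, abstol);
}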

ruohai0925 commented 5 years ago

Comments:

After many tests, the cell-centered GMG solver in AMReX seems able to handle cell-aspect ratios no larger than about 4.0. In the case above, dx/dy ~ 12, so AMG might be needed.

ruohai0925 commented 5 years ago

I tried the AMG solver in hypre and set max_coarsening_level = 0,

but the mac projection still blows up:

max_coarsening_level in the mac solve 0
use hypre in the mac solve
MLMG: Initial rhs               = 838.8898322
MLMG: Initial residual (resid0) = 838.8898322
MLMG: Failed to converge after 200 iterations. resid, resid/bnorm = 2.060094786e+179, 2.455739368e+176

Are there any suggestions? Should I switch to the PETSc solver?

Best.

ruohai0925 commented 5 years ago

@drummerdoc

I guess Mark might have some experience with simulating turbulent flows. Any suggestions on this issue?

The parameters are still the same as before: https://github.com/AMReX-Codes/amrex/issues/516#issuecomment-510975565

drummerdoc commented 5 years ago

Generally, I have never found anisotropic meshes to give useful results, even when the problem itself is anisotropic. I would instead set up a case with nearly isotropic mesh spacings and count on AMR to put resolution where you need it. You'll have overkill in some regions, but I see you have only about a factor of 8-10 to deal with. I think that's probably manageable, at least to get an initial result. You can worry about optimizing the solution strategy later, after you have a fully functioning setup.
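
To make that concrete, here is a small hedged sketch of the arithmetic only: pick a target spacing and round each direction's cell count up to a multigrid-friendly multiple. The target spacing and the rounding multiple below are illustrative choices, not a recommendation for this specific run.

#include <cmath>
#include <cstdio>

// Round n up to the nearest multiple of m (keeps the counts blockable and coarsenable).
static int round_up (double n, int m) { return m * static_cast<int>(std::ceil(n / m)); }

int main ()
{
    // Domain extents from the thread; aim for one spacing h in all directions.
    const double Lx = 12.56637, Ly = 2.0, Lz = 6.283185307;
    const double h  = Lz / 160.0;   // e.g. reuse the current z spacing (illustrative)
    const int    m  = 16;           // rounding multiple (e.g. a blocking factor)

    const int nx = round_up(Lx/h, m);
    const int ny = round_up(Ly/h, m);
    const int nz = round_up(Lz/h, m);
    std::printf("nx = %d  ny = %d  nz = %d\n", nx, ny, nz);
    std::printf("dx = %g  dy = %g  dz = %g\n", Lx/nx, Ly/ny, Lz/nz);
    return 0;
}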

I would just resist picking problems with the algorithms that you'll likely find not particularly interesting to solve... unless you like that sort of thing!