usnistgov / fipy

FiPy is a Finite Volume PDE solver written in Python
http://pages.nist.gov/fipy/en/latest

How could finer meshes end up with wrong answers? #1044

Closed CalebDmArcher closed 1 month ago

CalebDmArcher commented 1 month ago

Are there settings or considerations I need to take into account when choosing a proper mesh size for the model, such as lower and upper limits on the element size?

Problem description: Below is a Python script using FiPy and Gmsh that can be run with different solvers. It models an object whose initial temperature equals the ambient temperature and which receives no external heat. The steady-state temperature map of the object should therefore be uniformly equal to the ambient temperature.

However, changing the mesh size can produce very different results. If I set the characteristic length dx (in the Gmsh code) to 5.e-3 or larger, the final result equals the ambient value. If I set it to 1.e-3 or smaller, the answers suddenly become very small, such as 1.845, for all PETSc solvers, while SciPy LU and GMRES still return the correct result. (I thought a smaller mesh size would usually yield a more accurate solution at the cost of a longer run time; I did not expect it to yield a qualitatively different one.)
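For reference, a minimal sanity check (a sketch using the phi and T_ambient names from the script below) that should pass right after eq.solve():

import numpy as np

# With no heat source and an ambient-temperature initial condition, the
# steady-state field should be uniform at T_ambient; a large deviation
# means the linear solve did not actually converge.
assert np.allclose(phi.value, T_ambient, atol=1e-6), (
    f"solution drifted: min={phi.value.min():.4g}, max={phi.value.max():.4g}")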

Python code:

import os

# Set environment variables for the solver
# os.environ["FIPY_SOLVERS"] = "scipy"
os.environ["FIPY_SOLVERS"] = "petsc"
print("FIPY_SOLVERS is set to:", os.environ["FIPY_SOLVERS"])

os.environ["OMP_NUM_THREADS"] = "1"
print("OMP_NUM_THREADS is set to:", os.environ["OMP_NUM_THREADS"])

# Solver definitions
import fipy as fp
# scipy LU
# solver = fp.solvers.scipy.linearLUSolver.LinearLUSolver(tolerance=1e-10, iterations=10)

# scipy GMRES
# solver = fp.solvers.scipy.linearGMRESSolver.LinearGMRESSolver(iterations=1000, tolerance=1e-10)

# petsc LU without precon
# solver = fp.solvers.petsc.linearLUSolver.LinearLUSolver(tolerance=1e-10, iterations=10, precon='lu')

# petsc GMRES with precon, jacobi or bjacobi
# solver = fp.solvers.petsc.linearGMRESSolver.LinearGMRESSolver(precon='jacobi', iterations=1000, tolerance=1e-10)

# petsc GMRES without precon
solver = fp.solvers.petsc.linearGMRESSolver.LinearGMRESSolver(iterations=1000, tolerance=1e-10)

print("The solver package being used is", fp.solvers.solver)

from fipy import CellVariable, Gmsh3D, DiffusionTerm, Viewer, FaceVariable, ImplicitSourceTerm
from fipy.tools import numerix
import numpy as np

# Set the backend for Matplotlib
import matplotlib
matplotlib.use('TkAgg')

import matplotlib.pyplot as plt
import time
from scipy.interpolate import griddata

# Define dimensions and mesh size
Lx = 5e-2
Ly = 2e-2
t_cu = 2e-2
t_fr4 = 2e-2
Lz = t_cu + t_cu + t_fr4
dx = 1.e-3

k_cu = 400.
k_fr4 = 0.5

nx = ny = nz = int(Lx / dx)  # not used below; the resolution is set by dx in the .geo file

# Define the mesh
mesh = Gmsh3D("cube_27_test.geo")

# Parameters for the problem
h = 10.0  # Convective heat transfer coefficient (W/m^2·K)
T_ambient = 30.  # Ambient temperature (°C)
source_value = 0.
maskSurfaces = mesh.exteriorFaces

phi = CellVariable(name="solution variable", mesh=mesh, value=T_ambient)

xc, yc, zc = mesh.cellCenters
x, y, z = mesh.faceCenters

# Define the spatially varying diffusion coefficient
D = FaceVariable(mesh=mesh, value=k_cu)

# Define the source term
source = CellVariable(mesh=mesh, value=0.0)

# Define the equation with Robin boundary conditions, following the FiPy
# pattern: n.(a*phi + b*grad(phi)) = g on the masked faces, with
# a = h*n, b = D, g = h*T_ambient, i.e. -D*dphi/dn = h*(phi - T_ambient)
# (Newton cooling on every exterior face)
Gamma = D * ~maskSurfaces  # diffusion coefficient zeroed on exterior faces
dPf = FaceVariable(mesh=mesh, value=mesh._faceToCellDistanceRatio * mesh.cellDistanceVectors)
n = mesh.faceNormals
a = FaceVariable(mesh=mesh, value=h * n, rank=1)
b = D
g = FaceVariable(mesh=mesh, value=h * T_ambient, rank=0)
RobinCoeff = maskSurfaces * D * n / (dPf.dot(a) + b)

eq = (DiffusionTerm(coeff=Gamma) +
      source +
      (RobinCoeff * g).divergence - ImplicitSourceTerm(coeff=(RobinCoeff * a.dot(n)).divergence))

# Start the timer
print('Start of the equation solving process...')
start_time = time.process_time()
start_time_e = time.perf_counter()

# Solve the equation
eq.solve(var=phi)

# End the timer
end_time_e = time.perf_counter()
end_time = time.process_time()
elapsed_time = end_time_e - start_time_e
cpu_time = end_time - start_time
print('//////////END of equation solving process//////////')
print(f"CPU time: {cpu_time:.4f}s; elapsed time: {elapsed_time:.4f}s.")

if __name__ == '__main__':
    # Plot
    viewer = Viewer(vars=phi, datamin=0., datamax=40.)
    viewer.plot()

    # 2D slice at x = Lx/2
    mask_slice = abs(xc - Lx / 2) < dx / 2
    phi_slice = phi.value[mask_slice]
    phi_slice = np.round(phi_slice, 2)
    y_slice = yc[mask_slice]
    z_slice = zc[mask_slice]
    print(np.nanmin(phi_slice), np.nanmax(phi_slice))
    print(np.nanmin(phi.value), np.nanmax(phi.value))

    # Scatter plot
    plt.figure()
    plt.scatter(y_slice, z_slice, c=phi_slice, cmap='viridis', s=15)
    plt.colorbar(label='Phi Value')
    plt.xlabel('Y')
    plt.ylabel('Z')
    plt.title(f'Slice at x = {Lx / 2}')

    plt.show()

Gmsh code:

// Define cube dimensions
Lx = 5e-2;
Ly = 2e-2;
t_cu = 2e-2;
t_fr4 = 2e-2;
Lz = t_cu + t_cu + t_fr4;
dx = 1.e-3;  // characteristic mesh length attached to each Point

// Create the cube points
Point(1) = {0, 0, 0, dx};
Point(2) = {Lx, 0, 0, dx};
Point(3) = {Lx, Ly, 0, dx};
Point(4) = {0, Ly, 0, dx};
Point(5) = {0, 0, Lz, dx};
Point(6) = {Lx, 0, Lz, dx};
Point(7) = {Lx, Ly, Lz, dx};
Point(8) = {0, Ly, Lz, dx};

// Create lines for the cube
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line(5) = {5, 6};
Line(6) = {6, 7};
Line(7) = {7, 8};
Line(8) = {8, 5};
Line(9) = {1, 5};
Line(10) = {2, 6};
Line(11) = {3, 7};
Line(12) = {4, 8};

//Transfinite Line {1,2,3,4} = nx;
//Transfinite Line {5,6,7,8} = nx;
//Transfinite Line {9,10,11,12} = nz;

// Create curve loops for each face
Curve Loop(13) = {1, 2, 3, 4};
Curve Loop(14) = {5, 6, 7, 8};
Curve Loop(15) = {1, 10, -5, -9};
Curve Loop(16) = {2, 11, -6, -10};
Curve Loop(17) = {3, 12, -7, -11};
Curve Loop(18) = {4, 9, -8, -12};

// Create surfaces for each face
Plane Surface(19) = {13};
Plane Surface(20) = {14};
Plane Surface(21) = {15};
Plane Surface(22) = {16};
Plane Surface(23) = {17};
Plane Surface(24) = {18};

// Combine surfaces into a volume
Surface Loop(25) = {19, 20, 21, 22, 23, 24};
Volume(26) = {25};
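
For scale: shrinking dx from 5.e-3 to 1.e-3 refines each dimension by a factor of 5, so the cell count grows by roughly 5^3 = 125. A minimal sketch to check the size of the resulting linear system (numberOfCells is a standard FiPy mesh attribute):

from fipy import Gmsh3D

# Report how many cells (= unknowns in the linear system) the current
# characteristic length dx in cube_27_test.geo produces.
mesh = Gmsh3D("cube_27_test.geo")
print(mesh.numberOfCells, "cells")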
CalebDmArcher commented 1 month ago

It turned out that the problem above was due to the solver settings. The PETSc developers suggested adding the code below, and that solved my problem:

# Import petsc4py and set PETSc options globally
from petsc4py import PETSc
# Print the true (unpreconditioned) residual norm at each Krylov iteration
PETSc.Options().setValue('-ksp_monitor_true_residual', '')
# Report why the Krylov solver stopped (converged or diverged, and why)
PETSc.Options().setValue('-ksp_converged_reason', '')
# Use algebraic multigrid (GAMG) preconditioning instead of the default
# PETSc.Options().setValue('-pc_type', 'lu')
PETSc.Options().setValue('-pc_type', 'gamg')
guyer commented 1 month ago

I'm glad the 'gamg' preconditioner works for you, although you don't need all that. Just do

solver = fp.solvers.petsc.linearGMRESSolver.LinearGMRESSolver(iterations=1000, tolerance=1e-10, precon='gamg')

I think you earlier had a comment that it was doing 1000 iterations no matter what? That's because you create solver, but you never use it. Call:

eq.solve(var=phi, solver=solver)

As it is, you're just getting the default solver, which is GMRES with ILU preconditioning and 1000 iterations. You can override the defaults with direct petsc4py calls, but that's not how FiPy was designed to be used.
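Putting the two pieces together, the minimal change to your script is:

# Construct the preconditioned solver and actually pass it to solve()
solver = fp.solvers.petsc.linearGMRESSolver.LinearGMRESSolver(
    iterations=1000, tolerance=1e-10, precon='gamg')
eq.solve(var=phi, solver=solver)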

guyer commented 1 month ago

I'd prefer you didn't delete comments and information that you gave earlier. It makes it really hard to follow and diagnose what's going on.

CalebDmArcher commented 1 month ago

I'd prefer you didn't delete comments and information that you gave earlier. It makes it really hard to follow and diagnose what's going on.

Thank you for the response. Do you have any ideas, by any chance, about my other issue (#1043) on selecting the top and bottom surfaces?