SciML / NeuralPDE.jl

Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation
https://docs.sciml.ai/NeuralPDE/stable/

Distance functions as a description of complex geometries #793

Open alexiscltrn opened 7 months ago

alexiscltrn commented 7 months ago

Hello SciML team!

I have been using NeuralPDE for a few months as part of my thesis. The challenge I am facing is the need to solve both inverse and forward PDE problems on various complex geometries. Following the approach outlined in this paper, I currently use distance functions to hard-code boundary conditions in PINNs and to sample collocation points exclusively within the valid domain of my physical model.
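For context, the core idea of that approach is to build the boundary conditions into the network output itself: the prediction is taken to be $$\tilde{u}(x, y) = \phi(x, y)\,\mathrm{NN}_\theta(x, y),$$ where $\phi$ is an approximate distance function that is positive inside the domain and vanishes on the boundary, so the homogeneous Dirichlet conditions hold exactly for any network parameters $\theta$.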

Concretely, I use symbolic_discretize to build the data-free loss functions and manually merge in a sampling strategy that excludes collocation points outside the geometry. Unfortunately, this manual process prevents me from using NeuralPDE's extra features, such as adaptive loss weighting during training.

My feature request is to integrate this methodology seamlessly into NeuralPDE, allowing a more streamlined workflow and full use of the tool's advanced features. I would welcome insights on the best way to implement this integration.
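Purely as a strawman (this keyword does not exist in NeuralPDE today), something like an optional domain indicator on the training strategy would cover my use case:

# hypothetical interface sketch: domain_mask is NOT a real NeuralPDE keyword
pinn = PhysicsInformedNN(chain_u, QuasiRandomTraining(500; domain_mask = ϕ))

With a hook like this, the library could perform the rejection sampling internally, and the adaptive-loss machinery would see the masked collocation points transparently.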

Here is an MWE of my current workaround:

Problem: Poiseuille flow in an annular section

PDE: $u(x,y)$ satisfies: $$\frac{\partial^2u(x,y)}{\partial x^2} + \frac{\partial^2u(x,y)}{\partial y^2} = -\frac{G}{\mu}$$

Geometry: an annular pipe with inner radius $R_1$ and outer radius $R_2$ defines the cross-section.

Boundary Conditions: In polar coordinates $(r,\theta)$: $$u(R_1, \theta) = u(R_2, \theta) = 0$$ Here the geometry is simple enough that the problem could have been solved with a polar formulation; however, the workflow is designed to accommodate more complex cross-sectional shapes.
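For reference, integrating $u'' + u'/r = -G/\mu$ with these two no-slip conditions gives the exact annular Poiseuille profile used below to validate the PINN: $$u(r) = \frac{G}{4\mu}\left[(R_1^2 - r^2) + (R_2^2 - R_1^2)\,\frac{\ln(r/R_1)}{\ln(R_2/R_1)}\right]$$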

Additional parameters: the pressure gradient $G$ and dynamic viscosity $\mu$, set as arbitrary constants in the code below.

using SciMLBase
using NeuralPDE, ModelingToolkit
using ChainRulesCore
using Lux, OptimizationOptimisers
using Optimization # provides the Optimization.AutoZygote() and Optimization.solve used below
using QuasiMonteCarlo
using Zygote
using ProgressMeter
using Statistics

using GLMakie
GLMakie.activate!()

using Random
rng = Random.default_rng()
Random.seed!(rng, 0)

# arbitrary physical constants: pressure gradient, viscosity, inner and outer radii
G = 50.0
μ = 3.0
R1 = 0.25
R2 = 1.0

# ----- #
# Model #
# ----- #

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# 2D Poisson equation
eq = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -G/μ

# Boundary conditions
# Empty because the boundary conditions are hard-coded into the neural network.
# Currently, I think NeuralPDE.jl doesn't fully support empty boundary conditions.
bcs = []

# Domain
domains = [x ∈ (-1, 1), y ∈ (-1, 1)]

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x,y)])

# -------------- #
# Neural Network #
# -------------- #

# approximate distance functions to the boundary:
# positive inside the domain, zero on the boundary, negative outside
ϕ1(x, y) = sqrt(x^2 + y^2) - R1
ϕ2(x, y) = R2 - sqrt(x^2 + y^2)
ϕ(x, y) = ϕ1(x, y) * ϕ2(x, y)

# for fancier shapes, you can use, e.g.,
# ϕ1(x, y) = sqrt(x^2 + y^2) - R1 + R1/4*cos(4*atan(y, x))

function ϕ_(pts)
    # Evaluate ϕ column-wise on a 2×N matrix of points, returning a 1×N row.
    # Derivatives are ignored: ϕ does not depend on the network parameters,
    # and this avoids asking Zygote to differentiate through mapslices.
    ChainRulesCore.ignore_derivatives() do
        mapslices(pts, dims=1) do (x, y) # unpack one point per column
            ϕ(x, y)
        end
    end
end

# Hard-constrain the BCs: multiplying the network output by ϕ makes the
# prediction vanish on the boundary for any choice of parameters.
chain_u = SkipConnection(
    Chain(Dense(2, 32, Lux.tanh_fast), Dense(32, 32, Lux.tanh_fast), Dense(32, 32, Lux.tanh_fast), Dense(32, 1)),
    (chain_output, chain_input) -> ϕ_(chain_input) .* chain_output
)

pinn = PhysicsInformedNN(chain_u, QuasiRandomTraining(500))

# ---------- #
# Discretize #
# ---------- #
sym_prob = symbolic_discretize(pde_system, pinn)

# Sampling strategy that respects the geometry via the distance function
function sample_ϕ(n)
    # quasi-random sampling over the full bounding box
    pts = QuasiMonteCarlo.sample(n, [-1, -1], [1, 1], LatinHypercubeSample())
    # rejection step: keep only the columns (points) strictly inside the
    # domain, so slightly fewer than n points survive each call
    return pts[:, vec(ϕ_(pts)) .> 0]
end

# Merge the sampling strategy with the data-free PDE loss functions
function pinn_loss(u, p)
    # resample valid collocation points at every optimizer iteration
    pts = ChainRulesCore.@ignore_derivatives sample_ϕ(sym_prob.strategy.points)
    sum(map(l_ -> mean(abs2, l_(pts, u)), sym_prob.loss_functions.datafree_pde_loss_functions))
end
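As a quick sanity check of the merged loss (a usage sketch only, reusing the objects defined above; it is not part of the training loop):

# evaluate the merged loss once on the initial parameters; it should be finite
l0 = pinn_loss(sym_prob.flat_init_params, nothing)
println("initial merged loss: ", l0)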

# -------- #
# Training #
# -------- #

k = 0 # iteration counter for the progress bar
max_iter_ADAM = 10000
pbar = Progress(max_iter_ADAM, desc="Training: ", dt=1.0)
callback = function (p, l)
    global k
    k += 1
    next!(pbar; showvalues=[(:iter, k), (:loss, l)])
    return false
end

f_ = OptimizationFunction(pinn_loss, Optimization.AutoZygote())
pinn_prob = OptimizationProblem(f_, sym_prob.flat_init_params)
pinn_res = Optimization.solve(pinn_prob, OptimizationOptimisers.Adam(); callback=callback, maxiters=max_iter_ADAM)
println("Optimization success: ", SciMLBase.successful_retcode(pinn_res))

# --------------------- #
# Analysis and plotting #
# --------------------- #

# analytical solution for the annular domain (see the formula above)
function u_analytical(x, y)
    r = sqrt(x^2 + y^2)
    G/4μ*(R1^2-r^2) + G/4μ*(R2^2-R1^2)*log(r/R1)/log(R2/R1)
end

pred = map(Iterators.product(range(-1,1,200),range(-1,1,200))) do (x, y)
    # use NaN outside the domain so the geometry shows clearly in the plot
    ϕ(x,y) >= 0 ? first(sym_prob.phi([x, y], pinn_res.u)) : NaN
end

pred_error = map(Iterators.product(range(-1,1,200),range(-1,1,200))) do (x, y)
    ϕ(x,y) >= 0 ? abs2(first(sym_prob.phi([x, y], pinn_res.u)) - u_analytical(x, y)) : NaN
end

fig, ax, cont = contourf(range(-1,1,200), range(-1,1,200), pred)
Colorbar(fig[1,2], cont)
cont = contourf!(Makie.Axis(fig[2,1]), range(-1,1,200), range(-1,1,200), pred_error)
Colorbar(fig[2,2], cont)

fig

ChrisRackauckas commented 7 months ago

Yeah, we just don't support this yet. It can be done by representing this on a square manually: for the annulus, it's just a 2D rectangular grid in $(r, \theta)$ with periodic BCs on the left and right. This is something we want to get to soon, though.
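For reference, a minimal, untested sketch of that rectangular reformulation for the annulus above (reusing G, μ, R1, R2 from the MWE; the periodic condition is written as an equality between the two θ edges, and only the symbolic problem setup is shown):

# polar reformulation: the annulus becomes the rectangle (r, θ) ∈ [R1, R2] × [0, 2π]
@parameters r θ
@variables u(..)
Dr  = Differential(r)
Drr = Differential(r)^2
Dθθ = Differential(θ)^2

# Laplacian in polar coordinates: u_rr + u_r/r + u_θθ/r² = -G/μ
eq_polar = Drr(u(r, θ)) + Dr(u(r, θ))/r + Dθθ(u(r, θ))/r^2 ~ -G/μ

bcs_polar = [
    u(R1, θ) ~ 0.0,       # no-slip on the inner wall
    u(R2, θ) ~ 0.0,       # no-slip on the outer wall
    u(r, 0.0) ~ u(r, 2π), # periodicity in θ (the "left and right" sides)
]

domains_polar = [r ∈ (R1, R2), θ ∈ (0.0, 2π)]

@named polar_system = PDESystem(eq_polar, bcs_polar, domains_polar, [r, θ], [u(r, θ)])

This avoids the distance-function machinery entirely for this particular geometry, at the cost of generality: it only works for domains that map to a rectangle.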