SciML / NeuralPDE.jl

Physics-Informed Neural Networks (PINN) Solvers of (Partial) Differential Equations for Scientific Machine Learning (SciML) accelerated simulation
https://docs.sciml.ai/NeuralPDE/stable/

Pushing to GPU with mixed int/float types in boundary conditions errors #697

Open AlexRobson opened 1 year ago

AlexRobson commented 1 year ago

A minor thing, but I ran into this while playing around with the two sets of introductory code:

1. https://github.com/SciML/NeuralPDE.jl/blob/master/README.md#example-solving-2d-poisson-equation-via-physics-informed-neural-networks
2. https://docs.sciml.ai/NeuralPDE/stable/tutorials/gpu/

Essentially, naively extending example [1] from the README with the push to the GPU errors in CUDA unless the bcs are re-expressed. This was initially confusing, as I thought something was up with my CUDA set-up.

```
ERROR: CuArray only supports element types that are allocated inline.
Real is not allocated inline
```

```julia
# Re-expressing the bcs with uniform Float64 literals avoids the error:
# bcs = [u(0.0, y) ~ 0.0, u(1.0, y) ~ 0.0,
#     u(x, 0.0) ~ 0.0, u(x, 1.0) ~ 0.0]
```

This code should reproduce the error:

```julia
using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimisers
import ModelingToolkit: Interval, infimum, supremum
using Random, ComponentArrays

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# 2D PDE
eq = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sin(pi * x) * sin(pi * y)

# Boundary conditions ### Mix of ints and floats
bcs = [u(0, y) ~ 0.0, u(1, y) ~ 0,
    u(x, 0) ~ 0.0, u(x, 1) ~ 0]
# Space and time domains
domains = [x ∈ Interval(0.0, 1.0),
    y ∈ Interval(0.0, 1.0)]
# Discretization
dx = 0.1

# Neural network
dim = 2 # number of dimensions
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))

ps = Lux.setup(Random.default_rng(), chain)[1]
ps = ps |> ComponentArray |> gpu .|> Float64 ### push to gpu
discretization = PhysicsInformedNN(chain, GridTraining(dx), init_params = ps)

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization) # Errors
```

xtalax commented 1 year ago

The examples should represent best practices; the docs should be updated.

AlexRobson commented 1 year ago

FWIW, I had a look at this. IIUC, when generating the training data, `Iterators.product` is used, which doesn't promote, making CUDA unhappy:

```julia
span = [0, (0.0:0.1:1.0)];
map(points -> collect(points), Iterators.product(span...)) # 11-element Vector{Vector{Real}}
```

Can a promotion be added here?

```julia
T = promote_type(eltype.(span)...)
map(points -> collect(T, points), Iterators.product(span...)) # 11-element Vector{Vector{Float64}}
```

The error then emerges with:

```julia
x = Vector{Float64}([1.0, 2.0, 3.0])
CUDA.adapt(CuArray, x) # Works

x = Vector{Real}([1.0, 2.0, 3.0])
CUDA.adapt(CuArray, x) # Fails
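```

The restriction behind the error message is that `CuArray` element types must be stored inline; an abstract element type like `Real` boxes each element behind a pointer and fails that check. A package-free sketch of the distinction:

```julia
# CuArray requires element types that can be allocated inline.
# A concrete eltype like Float64 qualifies; the abstract Real does not,
# because each Real element is boxed behind a pointer.
x_ok  = Float64[1.0, 2.0, 3.0]
x_bad = Real[1.0, 2.0, 3.0]

isbitstype(eltype(x_ok))   # true  -> CUDA.adapt(CuArray, x_ok) works
isbitstype(eltype(x_bad))  # false -> CUDA.adapt(CuArray, x_bad) errors
```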

It looks like a couple of the tutorial examples would hit this, so it's plausibly easy enough to write accidentally, which is why I looked into it. That said, peering at it, I think this only appears in GridTraining, which is discouraged anyway; I don't know whether it appears elsewhere.
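If it helps, the proposed promotion could be wrapped in a small helper; `promoted_grid` is a hypothetical name for illustration, not NeuralPDE API:

```julia
# Hypothetical helper sketching the proposed fix: promote mixed-type spans
# to a common concrete eltype before collecting grid points, so the result
# is Vector{Float64} rather than Vector{Real}.
function promoted_grid(span...)
    T = promote_type(eltype.(span)...)
    map(p -> collect(T, p), Iterators.product(span...))
end

grid = promoted_grid(0, 0.0:0.1:1.0)  # Int endpoint mixed with a Float64 range
eltype(grid)                          # Vector{Float64}, safe to move to a CuArray
```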