LupoLab / Luna.jl

Nonlinear optical pulse propagator
MIT License

NaN in propagation #193

Open jtravs opened 3 years ago

jtravs commented 3 years ago

Granted, these parameters are rather ridiculous (see the core size), but they arose during an optimisation routine and we should probably handle this better. In fact, it ought to complete successfully (essentially we should just lose all the energy).

```julia
using Luna

gas = :HeJ
presin = 0.753
presout = 0.0
ain = 1e-6
aout = 1e-6
λ0 = 800e-9
τfwhm = 10e-15
energy = 400e-6
L = 2.34
λspan = (80e-9, 2000e-9)
τwindow = 1e-12
loss = true
ionisation = true

grid = Grid.RealGrid(L, λ0, λspan, τwindow)
afun = let a0=ain, aL=aout, L=L
    afun(z) = a0 + (aL-a0)*z/L
end
coren, densityfun = Capillary.gradient(gas, L, presin, presout)
m = Capillary.MarcatilliMode(afun, coren, loss=loss)
aeff(z) = Modes.Aeff(m, z=z)
responses = (Nonlinear.Kerr_field(PhysData.γ3_gas(gas)),)
if ionisation
    ionpot = PhysData.ionisation_potential(gas)
    ionrate = Ionisation.ionrate_fun!_ADK(ionpot)
    responses = (responses...,
                 Nonlinear.PlasmaCumtrapz(grid.to, grid.to, ionrate, ionpot))
end
linop, βfun! = LinearOps.make_linop(grid, m, λ0)
inputs = Fields.GaussField(λ0=λ0, τfwhm=τfwhm, energy=energy)
Eω, transform, FT = Luna.setup(grid, densityfun, responses, inputs, βfun!, aeff)
output = Output.MemoryOutput(0, grid.zmax, 2)
Luna.run(Eω, grid, linop, transform, FT, output)
```

This propagates very slowly, but seems to be working. But then it errors with:

```
...
[ Info: Progress: 0.09 %, ETA: XX:XX:XX, stepsize 4.73e-06, err 0.26, repeated 5
[ Info: Progress: 0.09 %, ETA: XX:XX:XX, stepsize 6.07e-06, err 0.08, repeated 5
[ Info: Progress: 0.09 %, ETA: XX:XX:XX, stepsize 1.01e-05, err 0.02, repeated 5
[ Info: Progress: 0.10 %, ETA: XX:XX:XX, stepsize 3.23e-05, err 0.02, repeated 5
[ Info: Progress: 0.17 %, ETA: 06:40:59, stepsize 4.58e-04, err 0.03, repeated 5
[ Info: Progress: 1.93 %, ETA: 06:47:29, stepsize 1.11e-01, err 148225.47, repeated 7
[ Info: Progress: 3.03 %, ETA: 04:16:56, stepsize 5.85e-01, err NaN, repeated 13
[ Info: Progress: 3.65 %, ETA: 03:32:04, stepsize 1.46e-01, err 148225.47, repeated 19
[ Info: Progress: 4.90 %, ETA: 02:36:17, stepsize 1.17e+00, err NaN, repeated 24
[ Info: Progress: 5.53 %, ETA: 02:18:00, stepsize 2.92e-01, err NaN, repeated 30
ERROR: LoadError: InexactError: trunc(Int64, NaN)
Stacktrace:
 [1] trunc at .\float.jl:703 [inlined]
 [2] ceil at .\float.jl:365 [inlined]
 [3] ffast at C:\Users\John\.julia\dev\Luna\src\Maths.jl:617 [inlined]
 [4] (::Luna.Maths.CSpline{Float64,Float64,Array{Float64,1},Array{Float64,1},Luna.Maths.var"#ffast#29"{Float64,Float64,Int64}})(::Float64) at C:\Users\John\.julia\dev\Luna\src\Maths.jl:685
 [5] dens at C:\Users\John\.julia\dev\Luna\src\Capillary.jl:279 [inlined]
```

(the full error is included below).

I am concerned about the sudden increase in step-size in the last few steps, and that a NaN leaks into the printed value of the error. Looking at RK45.jl I would have thought that this was handled at https://github.com/LupoLab/Luna/blob/8964ab0544accb4eec13e50aa74aa5e907a93f0a/src/RK45.jl#L380

Before that, if we get a NaN in s.err from the solver, then this should evaluate to false: https://github.com/LupoLab/Luna/blob/8964ab0544accb4eec13e50aa74aa5e907a93f0a/src/RK45.jl#L178 because every comparison to NaN is defined to be false.

So I am confused what is happening here.
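For reference, Julia follows IEEE 754 here: every ordered comparison involving NaN is false, so an acceptance check of the form `err <= tol` does indeed reject a NaN step, which is why the expectation above is reasonable. A minimal check:

```julia
# IEEE 754 semantics in Julia: all ordered comparisons with NaN are false,
# so a step-acceptance test like `err <= tol` rejects a NaN error estimate.
err = NaN
@assert !(err <= 1.0)   # step is not accepted...
@assert !(err > 1.0)    # ...but the opposite comparison is false too
@assert err != err      # NaN is not equal to itself
@assert isnan(err)      # isnan() is the only reliable detector
println("all NaN comparisons evaluate to false")
```

This means a NaN in the error estimate cannot sneak through the acceptance test itself; it can only appear in places that don't go through that comparison, such as the progress printout.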

error.txt

jtravs commented 3 years ago

One thought: if the energy drops close to zero, so that the pulse change per step is estimated to be negligible, then the step size will dramatically increase, and hence the linear propagator might error because of a larger attenuation coefficient.
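To illustrate the mechanism: a generic embedded-RK step controller (this is the textbook form, not necessarily Luna's exact controller in RK45.jl) scales the next step by roughly `(1/err)^(1/5)`, so when the field vanishes and the error estimate collapses towards zero, the proposed step grows until some clamp stops it:

```julia
# Sketch of a generic embedded RK45 step-size controller (illustrative,
# not Luna's actual code): the next step scales like (1/err)^(1/order),
# so err -> 0 drives unbounded step growth unless clamped by maxfac.
function next_step(h, err; order=5, safety=0.9, maxfac=10.0)
    err <= 0 && return h * maxfac          # guard the divide-by-zero case
    return h * min(maxfac, safety * (1 / err)^(1 / order))
end

h = 1e-6
for err in (0.5, 1e-3, 1e-9, 0.0)
    h = next_step(h, err)
    println("err = $err -> h = $h")        # h grows rapidly as err -> 0
end
```

With a near-zero error estimate the growth factor saturates at `maxfac` on every step, so `h` increases geometrically, which matches the sudden step-size jumps in the log above.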

chrisbrahms commented 3 years ago

The reason the NaN error is printed is just that the printing doesn't check whether the last step is ok. So if a step fails with s.err being NaN, it can still be printed.

I think you're right about the energy being the problem--the field just disappears and the error estimation essentially divides by zero. I saw something similar at some point when I forgot to turn off the loss for a PCF simulation. Fundamentally I suppose the step size should go to infinity and the propagation finish instantly. Does it improve if you change the limit we apply to α in LinearOps?

chrisbrahms commented 3 years ago

Actually there are two potential issues, aren't there: the error estimate may divide by zero and give nonsense answers (though this would lead to stepsize collapse?), or the stepsize gets too big and exp(α) is the problem.
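Both failure modes produce NaN through ordinary IEEE arithmetic; a small demonstration of the assumed mechanisms (these are generic illustrations, not Luna's exact code paths):

```julia
# Failure mode 1 (assumed): a relative error estimate with a vanished
# field divides zero by zero.
Δ, sol = 0.0, 0.0
@assert isnan(Δ / sol)          # 0/0 = NaN

# Failure mode 2 (assumed): exponentiating a linear operator with a
# large attenuation α over a large step dz over- or underflows.
α, dz = 3000.0, 1.0
@assert exp(-α/2 * dz) == 0.0   # magnitude underflows to exactly 0.0
@assert exp(α/2 * dz) == Inf    # wrong-signed exponent overflows to Inf
@assert isnan(Inf * 0.0)        # either extreme can poison later products
println("both mechanisms yield NaN")
```

Either way, once a single NaN enters the field array it spreads through the FFTs and nonlinear responses, which would explain the `trunc(Int64, NaN)` crash inside the density spline.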

jtravs commented 3 years ago

> I think you're right about the energy being the problem--the field just disappears and the error estimation essentially divides by zero.

Isn't that handled by atol?

chrisbrahms commented 3 years ago

> Isn't that handled by atol?

Yes you're right, it is.
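For context, the standard weighted error norm used by embedded RK controllers floors the denominator with the absolute tolerance, so a vanished field gives an error of 0 rather than 0/0 (this is the textbook Hairer-style norm; Luna's exact normalisation may differ):

```julia
# Generic scaled error norm for an embedded RK pair (illustrative sketch):
# atol keeps the denominator finite even when the solution y vanishes.
scaled_err(Δ, y; atol=1e-10, rtol=1e-6) = abs(Δ) / (atol + rtol * abs(y))

# With the field gone (y ≈ 0, Δ ≈ 0) the estimate is 0.0, not NaN:
e = scaled_err(0.0, 0.0)
@assert e == 0.0 && !isnan(e)
println("scaled_err(0, 0) = $e")
```

So with a well-formed atol the division-by-zero branch is ruled out, which points at the exp(α) overflow as the remaining culprit.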

Changing the limit on α in LinearOps.conj_clamp to 30 (from 3000) makes the simulation run to the finish. Setting a maximum stepsize, however, does not fix it, even if it's 100 μm. (I tested this while setting the energy to a few pJ; otherwise it takes forever.)

Options to solve this I can see:

  1. Make the limit in conj_clamp user-configurable. This is relatively quick, but takes trial and error for each case.
  2. Move conj_clamp (or just the clamping part) from LinearOps to RK45, so the clamp can be applied somewhere the stepsize is available. This risks silently changing the physics of the propagation, however.
  3. For this specific case: stricter bounds on your optimisation routine.
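Option 1 could be as simple as threading a keyword argument through to the clamp with the current hard-coded value as its default (the names and signature below are purely illustrative, not Luna's actual API):

```julia
# Hypothetical shape of option 1: expose the clamp limit as a keyword
# argument defaulting to the current hard-coded value of 3000.
# (Illustrative only; Luna's real conj_clamp operates on the complex
# linear operator, not a bare loss coefficient.)
conj_clamp(α; αmax=3000.0) = min(α, αmax)

@assert conj_clamp(5000.0) == 3000.0          # default behaviour unchanged
@assert conj_clamp(5000.0; αmax=30.0) == 30.0 # the value that fixed this case
println("clamp limit is now configurable")
```

Defaulting to the existing value keeps current simulations bit-for-bit unchanged while letting edge cases like this one opt into a tighter limit.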