MWE:
inner(f) = f(rand())
function test()
    f = eval( :( r -> exp(-r) ) )
    inner(f)
end
test()
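For reference, a minimal sketch of the usual way around this (discussed further down in this issue) is to force dynamic dispatch at the call site with Base.invokelatest; inner_latest and test_latest are just illustrative names:

```julia
# sketch: route the call through Base.invokelatest (Julia 0.6+), which
# dispatches in the latest world and therefore can see the eval'd method
inner_latest(f) = Base.invokelatest(f, rand())

function test_latest()
    f = eval( :( r -> exp(-r) ) )
    inner_latest(f)   # no world-age error
end

test_latest()
```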
It turns out this is actually a huge problem. A file containing the following script works:
using JuLIP, JuLIP.ASE, JuLIP.Potentials
calc = lennardjones(r0=JuLIP.ASE.rnn("Al"))
at = bulk("Al", cubic=true) * (20,20,2)
set_calculator!(at, calc)
set_constraint!(at, FixedCell(at))
gradient(at)
but the next one generates an error
using JuLIP, JuLIP.ASE, JuLIP.Potentials
function test()
    calc = lennardjones(r0=JuLIP.ASE.rnn("Al"))
    at = bulk("Al", cubic=true) * (20,20,2)
    set_calculator!(at, calc)
    set_constraint!(at, FixedCell(at))
    gradient(at)
end
test()
The reason seems to be that in the first case each top-level statement runs in the latest world, so the freshly eval'd code is visible to the next call, whereas in the second case the world is frozen until we exit test(), so the eval'd function cannot be called from inside it.
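To illustrate the "frozen world" (a sketch of the world-age semantics, not JuLIP code):

```julia
function make_f()
    eval( :( r -> exp(-r) ) )   # defines an anonymous function in a newer world
end

f = make_f()
f(rand())        # fine: top-level code always runs in the latest world

function make_and_call()
    g = eval( :( r -> exp(-r) ) )
    g(rand())    # world-age error: make_and_call runs in an older, frozen world
end
make_and_call()  # errors
```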
A temporary fix would be to wrap all functions generated with eval using FunctionWrappers. This comes with a ~~ca 10%~~ performance penalty; purely in terms of the function-call overhead the penalty actually seems very high, but I am not sure it would really make a difference within the overheads of, say, an SW potential.
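A sketch of how the wrapping might look; the Float64 -> Float64 signature and the placement of the wrapper are assumptions for illustration:

```julia
using FunctionWrappers: FunctionWrapper

inner(f) = f(rand())

function test()
    f  = eval( :( r -> exp(-r) ) )
    # wrap the eval'd closure in a concretely typed callable; calls then go
    # through a stored function pointer rather than through generic dispatch
    fw = FunctionWrapper{Float64, Tuple{Float64}}(f)
    inner(fw)
end

test()
```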
A solution due to @ettersi:
using Calculus: differentiate
type AnalyticPotential{F0,F1}
    f::F0
    f_d::F1
end
macro AnalyticPotential(expr)
    esc(:(
        AnalyticPotential(
            r -> $expr,
            r -> $(differentiate(expr, :r))
        )
    ))
end
inner(p) = p.f(rand())
function test()
    a = rand()
    p = @AnalyticPotential exp(-a*r)
    inner(p)
end
test()
Unfortunately, this won't work:
a = 1.0
p = @AnalyticPotential exp(a*r)
@show p.f(1.0) # 2.718281828459045
a = 2.0
@show p.f(1.0) # 7.38905609893065
Is this the problem that the let blocks solve?
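For comparison, the capture-by-value fix via a let block (used further below with @PairPotential) would look something like this with the @AnalyticPotential prototype:

```julia
a = 1.0
p = let a = a                    # bind the *current value* of a into the closure
    @AnalyticPotential exp(a*r)
end
@show p.f(1.0)   # 2.718281828459045
a = 2.0
@show p.f(1.0)   # still 2.718281828459045: p no longer sees the global a
```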
Here are the timings (for forces) for wrapped vs. unwrapped potentials; a sketch of a timing loop follows the numbers:
Timing Lennard-Jones:
0.016561 seconds (19 allocations: 75.578 KiB)
0.017770 seconds (19 allocations: 75.578 KiB)
0.016917 seconds (19 allocations: 75.578 KiB)
Timing Lennard-Jones-Wrapped:
0.021734 seconds (19 allocations: 75.578 KiB)
0.023149 seconds (19 allocations: 75.578 KiB)
0.024302 seconds (19 allocations: 75.578 KiB)
Timing Stillinger-Weber:
0.007784 seconds (121.62 k allocations: 7.178 MiB)
0.019893 seconds (121.62 k allocations: 7.179 MiB, 57.78% gc time)
0.009960 seconds (121.62 k allocations: 7.178 MiB)
Timing Stillinger-Weber-Wrapped:
0.017970 seconds (121.62 k allocations: 7.179 MiB, 36.49% gc time)
0.011683 seconds (121.62 k allocations: 7.178 MiB)
0.010559 seconds (121.62 k allocations: 7.178 MiB)
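Timings of this kind can be produced with a loop along the following lines (a sketch; forces as referenced in the timing headers, and at set up as in the scripts earlier in this issue):

```julia
# sketch of a timing harness; the warm-up call separates compilation time
# from the run times reported above
function benchmark(at)
    forces(at)             # warm-up / compile
    for _ in 1:3
        @time forces(at)
    end
end
```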
There are two short-term options:
1. Work around the problem by enforcing dynamic dispatch at the call site (rather than in the definition of the potential) using invokelatest; see the new benchmark gist and the discussion.
2. I will discuss macros and generated functions with Simon to see whether this gives a cleaner solution; if not, then this just needs to be documented.
This seems like a fairly serious regression. Do I understand correctly that it only causes problems in REPL/notebooks when potentials are redefined?
I have now looked at your benchmark timings; it seems like the invokelatest() version has very little overhead, so is this fine?
I had just misunderstood invokelatest. It actually has a huge overhead, so if I were to apply it inside a performance-critical loop it would be awful. But in the latest timings I am effectively using a type-inference barrier function, i.e., the outer function is interpreted but the inner function is compiled. Without invokelatest this runs into the world-age problem, while invokelatest enforces dynamic dispatch, which doesn't have this problem to begin with. Is my explanation making sense?
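A sketch of the pattern just described, with illustrative names (energy and energy_inner are not JuLIP functions): the dynamic dispatch via invokelatest happens once per outer call, while the hot inner loop is compiled for the concrete type of f:

```julia
energy_inner(f, rs) = sum(f(r) for r in rs)              # compiled, fast hot loop

energy(f, rs) = Base.invokelatest(energy_inner, f, rs)   # one dynamic dispatch per call

function test()
    f  = eval( :( r -> exp(-r) ) )
    rs = rand(1000)
    energy(f, rs)    # works: energy_inner is looked up in the latest world
end
test()
```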
Just had a very useful discussion with Simon (he is teaching me to write macros :)). His suggestion is to replace generating analytic potentials via eval with a macro. The usage would then be as follows:
a, r0 = params()
# Old
p = PairPotential( :( exp( - $a * (r/$r0 - 1.0) ) ), :r )
# New
p = @PairPotential r -> exp( - a * (r/r0 - 1.0) )
Apart from being more readable (I think), it completely circumvents the need for invokelatest.
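A minimal sketch of what such a macro could look like, modelled on the @AnalyticPotential prototype above; it assumes a constructor PairPotential(f, f_d), and the actual JuLIP implementation may well differ:

```julia
using Calculus: differentiate

macro PairPotential(ex)
    @assert ex.head == :(->)        # expect an expression of the form  r -> body
    var  = ex.args[1]               # the variable symbol, e.g. :r
    body = ex.args[2].args[end]     # strip the LineNumberNode (single-expression bodies only)
    esc(:( PairPotential( $var -> $body,
                          $var -> $(differentiate(body, var)) ) ))
end
```

Because everything is escaped into the caller's scope, a and r0 in the body refer to the caller's variables, which is exactly why the parameter-capture behaviour discussed next shows up.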
Maybe a big, maybe a small problem is the following:
a = 1.234; r0 = 0.9
p = @PairPotential r -> exp( - a * (r/r0 - 1.0) )
p(1.0) # 0.871873345933761
a = 2.0
p(1.0) # 0.800737402916808
To circumvent this, one needs to use let blocks:
a = 1.234; r0 = 0.9
p = let a = a, r0 = r0
    @PairPotential r -> exp( - a * (r/r0 - 1.0) )
end
p(1.0) # 0.871873345933761
a = 2.0
p(1.0) # 0.871873345933761
The way I see it, it could be a feature that both options are possible; e.g. it would allow changing the parameters in, say, the Morse potential after it has been constructed. This is currently not possible.
@jameskermode do you have a view on this?
I should also add that this macro is most likely the way analytic potentials really should be implemented (as in: this is how the language was intended to be used).
I agree it's much more readable using the @PairPotential form, especially the way it removes the need for expression quoting, $r, etc. I don't think the updating of parameters is too bad either, provided one is aware of it.
How does using macros interoperate with type inference? Can we still have a type hierarchy with, e.g., a corresponding PairPotential type?
The macro @PairPotential will generate an object p::PairPotential, where PairPotential is the same type as before. So the type hierarchy would remain unchanged.
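To illustrate the point, a sketch with assumed names (AbstractPairPotential, AnalyticPairPotential; JuLIP's real hierarchy may differ): the macro only constructs an instance of an existing parametric type, so generic code that dispatches on the abstract supertype is unaffected:

```julia
abstract type AbstractPairPotential end

type AnalyticPairPotential{F0,F1} <: AbstractPairPotential   # `type`, as in the 0.6-era snippets above
    f::F0      # r -> V(r)
    f_d::F1    # r -> V'(r)
end

# generic code written against the abstract type keeps working, regardless of
# whether the instance was built via eval, via the macro, or by hand
evaluate(p::AbstractPairPotential, r)   = p.f(r)
evaluate_d(p::AbstractPairPotential, r) = p.f_d(r)
```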
Great; I was worried there would be a clash between the macro @PairPotential and the type PairPotential, but I guess this is fine. If it's not too much work to convert the code to the macro form, I think this would be worthwhile.
If I really wanted to get ambitious, it would actually be possible to write a macro like this:
a = 1.234; r0 = 0.9
p = @PairPotential r -> exp( - a * (r/r0 - 1.0) ), let = a, r0
which would wrap this all into a let block so the user doesn't have to.
OK, then I'll get going on this (not the ambitious version, though!)
Thank you for the input!
For future reference, I've put all the different variants into the gist.
There is an interesting new "feature" in v0.6: "In version 0.6, Julia now keeps track of all the callers of a given function (called backedges) so it can go back and invalidate the previous code with any updated methods."
See e.g. the detailed explanation in Redefining Methods.
Related, from #68: preconditioner matrix assembly in v0.6 crashes; cf. preconditioners.jl / testsolve.jl. I am now guessing that the problem has something to do with either the precomputation of the refactor and then redefinition, or with the @D, @DD macros. Will need to revisit them or get rid of them. Or quite possibly there is a problem with the way the analytic potentials are defined...