JuliaDiff / ForwardDiff.jl

Forward Mode Automatic Differentiation for Julia

Working with anonymous functions #694

Closed: matheusdiogenesandrade closed this issue 8 months ago

matheusdiogenesandrade commented 8 months ago

Hello.

I am working on an application where I have to dynamically define a set of functions and, from time to time, provide their respective Jacobians and Hessians. Let's take a look at the following snippet.

using ForwardDiff

const N::Int = 12;

x::Vector{Float64} = zeros(N * N)

c = []
for i in 1:10
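    # each closure's argument is annotated as Vector{Float64}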
    push!(c, x::Vector{Float64} -> sum(x))
end

ForwardDiff.gradient(c[1], x)

This dumps the following error:

ERROR: LoadError: MethodError: no method matching (::var"#3#4")(::Vector{ForwardDiff.Dual{ForwardDiff.Tag{var"#3#4", Float64}, Float64, 12}})
Closest candidates are:
  (::var"#3#4")(::Vector{Float64}) at ~/code.jl:9
Stacktrace:
 [1] chunk_mode_gradient(f::var"#3#4", x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{var"#3#4", Float64}, Float64, 12, Vector{ForwardDiff.Dual{ForwardDiff.Tag{var"#3#4", Float64}, Float64, 12}}})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/PcZ48/src/gradient.jl:123
 [2] gradient(f::Function, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{var"#3#4", Float64}, Float64, 12, Vector{ForwardDiff.Dual{ForwardDiff.Tag{var"#3#4", Float64}, Float64, 12}}}, ::Val{true})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/PcZ48/src/gradient.jl:21
 [3] gradient(f::Function, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{var"#3#4", Float64}, Float64, 12, Vector{ForwardDiff.Dual{ForwardDiff.Tag{var"#3#4", Float64}, Float64, 12}}}) (repeats 2 times)
   @ ForwardDiff ~/.julia/packages/ForwardDiff/PcZ48/src/gradient.jl:17
 [4] top-level scope
   @ ~/code.jl:12
 [5] include(fname::String)
   @ Base.MainInclude ./client.jl:476
 [6] top-level scope
   @ REPL[1]:1
in expression starting at ~/code.jl:12

Is there any way to retrieve the Jacobian and Hessian of an anonymous function?

Thanks and regards.

mcabbott commented 8 months ago

x::Vector{Float64} -> sum(x)

This function only accepts Vector{Float64}, not a vector of ForwardDiff's Dual numbers, hence the error. You don't gain anything from such type annotations, and should probably just write x -> sum(x) (which is equivalent to sum, but I guess your real functions are more complex).
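A minimal sketch of that suggestion, reusing the names from the snippet above (the AbstractVector{<:Real} variant is just one possible way to keep an annotation while still admitting Dual numbers):

using ForwardDiff

const N::Int = 12;

x::Vector{Float64} = zeros(N * N)

c = []
for i in 1:10
    # no argument annotation, so the closure also accepts vectors of Dual numbers
    push!(c, x -> sum(x))
    # alternatively, keep a constraint that Dual still satisfies:
    # push!(c, (x::AbstractVector{<:Real}) -> sum(x))
end

ForwardDiff.gradient(c[1], x)   # 144-element vector of ones
ForwardDiff.hessian(c[1], x)    # 144x144 matrix of zeros, since sum is linear

The same applies to ForwardDiff.jacobian for closures that return a vector rather than a scalar.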

matheusdiogenesandrade commented 8 months ago

Thanks.