JuliaDiff / FiniteDiff.jl

Fast non-allocating calculations of gradients, Jacobians, and Hessians with sparsity support

What am I doing wrong with this gradient computation? #176

Closed martinmestre closed 10 months ago

martinmestre commented 10 months ago
f(x,y)=y*x^2
FiniteDiff.finite_difference_gradient(f,[1.,1.])

ERROR: MethodError: no method matching ^(::Vector{Float64}, ::Int64)

Closest candidates are:
  ^(::Union{AbstractChar, AbstractString}, ::Integer)
   @ Base strings/basic.jl:733
  ^(::LinearAlgebra.Hermitian, ::Integer)
   @ LinearAlgebra ~/.julia/juliaup/julia-1.9.0+0.x64.linux.gnu/share/julia/stdlib/v1.9/LinearAlgebra/src/symmetric.jl:697
  ^(::LinearAlgebra.Hermitian{T, S} where S<:(AbstractMatrix{<:T}), ::Real) where T
   @ LinearAlgebra ~/.julia/juliaup/julia-1.9.0+0.x64.linux.gnu/share/julia/stdlib/v1.9/LinearAlgebra/src/symmetric.jl:708
  ...

Stacktrace:
 [1] literal_pow
   @ ./intfuncs.jl:338 [inlined]
 [2] f(x::Vector{Float64})
   @ Main ./REPL[13]:1
 [3] finite_difference_gradient!(df::Vector{Float64}, f::typeof(f), x::Vector{Float64}, cache::FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Float64}, Val{:central}(), Float64, Val{true}()}; relstep::Float64, absstep::Float64, dir::Bool)
   @ FiniteDiff ~/.julia/packages/FiniteDiff/grio1/src/gradients.jl:318
 [4] finite_difference_gradient(f::typeof(f), x::Vector{Float64}, fdtype::Val{:central}, returntype::Type, inplace::Val{true}, fx::Nothing, c1::Nothing, c2::Nothing; relstep::Float64, absstep::Float64, dir::Bool)
   @ FiniteDiff ~/.julia/packages/FiniteDiff/grio1/src/gradients.jl:133
 [5] finite_difference_gradient(f::Function, x::Vector{Float64}, fdtype::Val{:central}, returntype::Type, inplace::Val{true}, fx::Nothing, c1::Nothing, c2::Nothing)
   @ FiniteDiff ~/.julia/packages/FiniteDiff/grio1/src/gradients.jl:99
 [6] finite_difference_gradient(f::Function, x::Vector{Float64})
   @ FiniteDiff ~/.julia/packages/FiniteDiff/grio1/src/gradients.jl:99
 [7] top-level scope
   @ REPL[116]:1

I also tried FiniteDiff.finite_difference_gradient(f, 1., 1.). Thank you very much.

ChrisRackauckas commented 10 months ago

As documented, it's a vector function.

using FiniteDiff
f(x)=x[2]*x[1]^2
FiniteDiff.finite_difference_gradient(f,[1.,1.])
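To spell out the fix: `finite_difference_gradient` expects a function of a single vector argument, so the two-argument `f(x, y)` must be rewritten to index into `x`. A sketch of both the allocating call and the cached in-place form (the `GradientCache` constructor arguments here follow the package defaults; consult the FiniteDiff.jl docs for the full signature):

```julia
using FiniteDiff

# Scalar function of a single vector argument, equivalent to y*x^2
f(x) = x[2] * x[1]^2

# Analytically, ∇f(x) = [2*x[1]*x[2], x[1]^2], so at [1.0, 1.0]
# the gradient is approximately [2.0, 1.0].
g = FiniteDiff.finite_difference_gradient(f, [1.0, 1.0])

# For repeated evaluations, a preallocated cache and the in-place
# variant avoid allocations on each call (sketch under assumed defaults):
x = [1.0, 1.0]
df = zeros(2)
cache = FiniteDiff.GradientCache(df, x)
FiniteDiff.finite_difference_gradient!(df, f, x, cache)
```

The central-difference default makes `g` accurate to well within floating-point tolerance of the analytic gradient here.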