Closed: dmaldona closed this issue 4 years ago.
@michel2323, @frapac: feel free to add/modify.
Thanks for opening this issue! I thought more about it today. Yesterday, we discussed implementing the `solve` function as:

```julia
function solve(pf::PF, u::AbstractVector, x0::AbstractVector)
    ...
end
```

with `PF` a power flow object storing the data of the problem, `u` the current decision variables, and `x0` an (optional) initial guess.
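For concreteness, a Newton-Raphson implementation of that signature could look roughly like the sketch below; `residual!` and `jacobian` are placeholder names standing in for the evaluation of g(x, u) and its Jacobian ∂g/∂x, not functions from the codebase:

```julia
using LinearAlgebra

# Rough sketch only; `residual!` and `jacobian` are hypothetical helpers.
function solve(pf::PF, u::AbstractVector, x0::AbstractVector;
               tol=1e-8, maxiter=20)
    x = copy(x0)
    g = similar(x)
    for _ in 1:maxiter
        residual!(g, pf, x, u)       # g ← g(x, u)
        norm(g) < tol && return x    # x is feasible: g(x, u) ≈ 0
        J = jacobian(pf, x, u)       # ∂g/∂x at the current iterate
        x = x - J \ g                # Newton-Raphson step
    end
    error("Newton-Raphson did not converge after $maxiter iterations")
end
```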
The more I think about it, the more I believe that the function `solve` should compute a feasible state variable `x` (depending on the current `u`), but also the adjoint w.r.t. `u` (which is equivalent to the reduced gradient we discussed).
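For reference, these are the standard implicit-function-theorem identities that the adjoint/reduced-gradient computation amounts to (not spelled out above, but standard): with the state defined implicitly by $g(x, u) = 0$ and an objective $f(x, u)$,

```latex
\frac{\mathrm{d}x}{\mathrm{d}u} = -\left(\frac{\partial g}{\partial x}\right)^{-1}\frac{\partial g}{\partial u},
\qquad
\nabla_u f = \frac{\partial f}{\partial u} - \left(\frac{\partial g}{\partial u}\right)^{\top}\lambda,
\quad\text{where}\quad
\left(\frac{\partial g}{\partial x}\right)^{\top}\lambda = \frac{\partial f}{\partial x}.
```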
A nice way to implement this would be to add a dependency on ChainRulesCore.jl, and add a function `frule` returning a solution `x` and the corresponding sensitivity `∂X`:

```julia
function frule((Δself, Δargs...), ::typeof(foo), args...; kwargs...)
    ...
    return x, ∂X
end
```
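Specialized to `solve`, such an `frule` might look like the following sketch; `jacobian_x` and `jacobian_u` (the Jacobians of g with respect to x and u, evaluated at the solution) are hypothetical helpers:

```julia
using ChainRulesCore

# Sketch only: `jacobian_x` and `jacobian_u` are placeholder names for
# the Jacobians ∂g/∂x and ∂g/∂u evaluated at the feasible state.
function ChainRulesCore.frule((Δself, Δpf, Δu, Δx0), ::typeof(solve),
                              pf::PF, u::AbstractVector, x0::AbstractVector)
    x = solve(pf, u, x0)        # primal solve: find x with g(x, u) = 0
    Gx = jacobian_x(pf, x, u)   # ∂g/∂x
    Gu = jacobian_u(pf, x, u)   # ∂g/∂u
    ∂x = -(Gx \ (Gu * Δu))      # tangent of x via the implicit function theorem
    return x, ∂x
end
```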
@frapac I like your idea, although I am not familiar with ChainRulesCore.jl.
I do have one concern. The solution of `g(x, u)`, which is our `solve`, could be replaced by a method that does not require the computation of the Jacobian matrices. In particular, I would like to explore the use of Padé approximants in the future. In that case, I was wondering whether we should require `solve` to compute the adjoints.
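One possible resolution, sketched here under hypothetical names (`adjoint_solve` and `jacobian_x` are illustrations, not real functions in the repo), is to keep `solve` responsible only for feasibility and make the adjoint a separate, on-demand computation:

```julia
# Hypothetical decoupling: `solve` returns only a feasible state, and the
# adjoint is computed separately. `jacobian_x` is again a placeholder.
function adjoint_solve(pf::PF, x::AbstractVector, u::AbstractVector,
                       ∂f∂x::AbstractVector)
    Gx = jacobian_x(pf, x, u)   # ∂g/∂x, only formed when adjoints are needed
    return Gx' \ ∂f∂x           # λ solving (∂g/∂x)ᵀ λ = ∂f/∂x
end
```

A Jacobian-free method (e.g. one based on Padé approximants) could then implement `solve` without ever touching `adjoint_solve`.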
Refactoring is complete.
In its current form, the Newton-Raphson routine includes code that needs to be externalized to accommodate optimization algorithms. This is a partial list:
Reviewers: @michel2323, @frapac