JuliaSmoothOptimizers / FluxNLPModels.jl


`local_loss` should call increment obj #16

Open tmigot opened 1 year ago

tmigot commented 1 year ago

https://github.com/JuliaSmoothOptimizers/FluxNLPModels.jl/blob/d2cf7b98da9b979c87661aad96038a806df22f2f/src/utils.jl#L11

or have some specific counters.
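A minimal sketch of the suggestion, using `increment!` from NLPModels.jl. The body of `local_loss` below is illustrative only (reconstructed from the call sites in this thread, not copied from utils.jl):

```julia
using NLPModels: increment!

# Hypothetical sketch: count each call to local_loss as an objective
# evaluation by bumping nlp.counters.neval_obj.
function local_loss(nlp, x, y, w)
  increment!(nlp, :neval_obj)         # record one objective evaluation
  return nlp.loss_f(nlp.chain(x), y)  # loss computation as elsewhere in the package
end
```

Alternatively, as the issue title suggests, the package could define its own dedicated counter (e.g. a hypothetical `:neval_local_loss`) rather than reusing `:neval_obj`.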

farhadrclass commented 11 months ago

@tmigot would it be possible to explain this in more detail?

tmigot commented 11 months ago

It looks like this function actually evaluates the objective function, yet it is sometimes called without updating the counters.

farhadrclass commented 11 months ago

I agree. Also, in `objgrad!`, can we improve this?

```julia
x, y = nlp.current_training_minibatch
f_w = nlp.loss_f(nlp.chain(x), y)
g .= gradient(w_g -> local_loss(nlp, x, y, w_g), w)[1]
```
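One possible shape for a counter-aware `objgrad!`, hedged: the method signature and surrounding fields follow the snippet above, and whether the increments belong here or inside `local_loss` is exactly the design question of this issue.

```julia
using NLPModels: increment!
using Flux: gradient

# Illustrative sketch: update the objective and gradient counters in
# objgrad! itself. If local_loss also increments :neval_obj, the
# increment!(nlp, :neval_obj) here should be dropped to avoid
# double counting.
function NLPModels.objgrad!(nlp, w, g)
  increment!(nlp, :neval_obj)
  increment!(nlp, :neval_grad)
  x, y = nlp.current_training_minibatch
  f_w = nlp.loss_f(nlp.chain(x), y)
  g .= gradient(w_g -> local_loss(nlp, x, y, w_g), w)[1]
  return f_w, g
end
```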