-
Hi,
I am having trouble using optax's LBFGS with equinox types. I am trying to run a linear regression model from this notebook: https://github.com/ubcecon/ECON622/blob/master/lectures/lectures/…
-
Hi, I'm using Optimization.jl in my package, [DJUICE.jl](https://github.com/DJ4Earth/DJUICE.jl), to optimize a cost function. The example is [here](https://github.com/DJ4Earth/DJUICE.jl/blob/test_opt…
-
I am assuming the non-parallel one is for reference and will disappear once the parallel one works.
-
Hi Daniel,
I am puzzled by the normalization and denormalization of gradients in LBFGS. Why are the gradients multiplied by `C_vp` and `C_rho` in both the normalization and the denormalization steps? It will make the …
-
**Issue Summary**
A `RecursionError` raised in `nn_examples.ipynb` renders the notebook unusable.
The error indicates infinite recursion in `SKMLPRunner`, specifically within the `_mlp_l…
-
https://github.com/google-deepmind/optax/blob/main/examples/lbfgs.ipynb
-
It says to load Symbolics, but I have already loaded it.
```jl
Julia 1.10.3
⌃ [a0c0ee7d] DifferentiationInterface v0.5.11
[f6369f11] ForwardDiff v0.10.36
[7f7a1694] Optimization v3.27.0
…
```
-
I know that most people use ADAM or SGD to optimize the weights of a NN. However, for some small NN architectures, an optimization method with line search (e.g., LBFGS) would be far more e…
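To illustrate the point about line-search methods on small networks, here is a toy sketch (my own example, assuming SciPy is acceptable) that fits a one-hidden-layer tanh network by minimizing the full-batch MSE with `scipy.optimize.minimize(method="L-BFGS-B")`:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

H = 8  # hidden units (small network, where quasi-Newton methods shine)

def unpack(theta):
    # Flat parameter vector -> (W1, b1, W2, b2)
    i = 0
    W1 = theta[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = theta[i:i + H]; i += H
    W2 = theta[i:i + H]; i += H
    b2 = theta[i]
    return W1, b1, W2, b2

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

theta0 = rng.normal(scale=0.5, size=2 * H + H + H + 1)
# Gradient is approximated by finite differences here; an analytic or
# autodiff gradient would be used in practice.
res = minimize(mse, theta0, method="L-BFGS-B")
print(res.fun)
```

Because the whole parameter vector fits comfortably in memory and the loss is evaluated on the full batch, the line search can take large, well-scaled steps instead of the many small steps a first-order method would need.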
-
Are there any plans to implement the LBFGS optimizer? In my experience with sklearn in Python, it performs better and converges faster than ADAM for small sample sizes. Having this implemented will help …
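For reference, the sklearn behavior mentioned above: scikit-learn's MLP estimators accept `solver="lbfgs"`, which the scikit-learn docs recommend for small datasets. A minimal sketch with made-up toy data and layer sizes:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])

# solver="lbfgs" runs full-batch L-BFGS; on small datasets it often
# converges faster and fits better than the default "adam".
model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(X, y)
print(model.score(X, y))
```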