JuliaManifolds / Manopt.jl

🏔️ Manopt.jl – Optimization on Manifolds in Julia
http://manoptjl.org

New benchmark #339

Closed mateuszbaran closed 6 months ago

mateuszbaran commented 8 months ago

To compare the performance of Manopt.jl and Optim.jl. TODO:

codecov[bot] commented 8 months ago

Codecov Report

Attention: 3 lines in your changes are missing coverage. Please review.

Comparison is base (8851619) 99.45% compared to head (649bd55) 99.57%. Report is 2 commits behind head on master.

:exclamation: Current head 649bd55 differs from pull request most recent head 92db97e. Consider uploading reports for the commit 92db97e to get more accurate results

| Files | Patch % | Lines |
| --- | --- | --- |
| src/solvers/quasi_Newton.jl | 25.00% | 3 Missing :warning: |

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master     #339      +/-   ##
==========================================
+ Coverage   99.45%   99.57%   +0.12%
==========================================
  Files          69       69
  Lines        6418     6402      -16
==========================================
- Hits         6383     6375       -8
+ Misses         35       27       -8
```


kellertuer commented 8 months ago

Instead of StopWhenGradientInfNormLess one could also consider a norm= keyword for StopWhenGradientNormLess – but that should still default to the 2-norm, since that is the norm induced by the Riemannian metric.
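The norm= keyword idea can be sketched in plain Julia – illustrative only; `StopWhenNormLessSketch` and `should_stop` are made-up names, not Manopt.jl API:

```julia
# Sketch: one stopping criterion type, parametrized by the norm it uses.
struct StopWhenNormLessSketch{F}
    threshold::Float64
    norm::F
end
# Default to the 2-norm, as suggested above.
StopWhenNormLessSketch(ε; norm = g -> sqrt(sum(abs2, g))) =
    StopWhenNormLessSketch(ε, norm)

should_stop(c::StopWhenNormLessSketch, grad) = c.norm(grad) < c.threshold

# The two norms can disagree on the same gradient:
grad = [0.08, 0.08, 0.08]
c2   = StopWhenNormLessSketch(0.1)                          # ‖grad‖₂ ≈ 0.139
cinf = StopWhenNormLessSketch(0.1; norm = g -> maximum(abs, g))  # ‖grad‖∞ = 0.08
```

With these thresholds the ∞-norm criterion fires while the 2-norm one does not, which is exactly the behavioral difference the keyword would expose.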

I am not sure why the strong Wolfe line search errors; that would be interesting to know.

We could also remove the two existing benchmarks, since I never ran them. In the long run we could set up a benchmark CI, but I have not yet fully understood how those work and which packages one would use for that.

mateuszbaran commented 8 months ago

The test failure seems unrelated to my changes (it's the ALM solver).

kellertuer commented 8 months ago

Well, ALM has a subsolver that is often L-BFGS. So if ALM breaks, quasi-Newton's behavior changed (maybe an error, but sometimes ALM is also a bit unstable) – either way it is an effect of changing L-BFGS.
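The ALM-subsolver relationship can be illustrated with a toy Euclidean sketch (not Manopt.jl code; plain gradient descent stands in for the L-BFGS subsolver, and all names and the example problem are made up for illustration):

```julia
# Toy ALM: min (x₁-2)² + (x₂-2)²  subject to  x₁ + x₂ = 2  (solution: x = [1, 1]).
# ALM repeatedly minimizes an augmented Lagrangian with an inner solver,
# so a regression in the inner solver surfaces as an "ALM failure".
f(x) = (x[1] - 2)^2 + (x[2] - 2)^2
g(x) = x[1] + x[2] - 2                        # equality constraint g(x) = 0

function alm(; μ = 10.0, outer = 20, inner = 500, step = 0.04)
    x, λ = [0.0, 0.0], 0.0
    for _ in 1:outer
        for _ in 1:inner                      # inner subsolver (L-BFGS stand-in)
            gL = [2(x[1] - 2) + λ + μ * g(x),
                  2(x[2] - 2) + λ + μ * g(x)]
            x .-= step .* gL
        end
        λ += μ * g(x)                         # multiplier update
    end
    return x, λ
end

x, λ = alm()    # x ≈ [1, 1], λ ≈ 2
```

If the inner loop stops converging, the outer multiplier updates are driven by wrong iterates – which is why an ALM test failure is a useful canary for quasi-Newton changes.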

mateuszbaran commented 8 months ago

OK, then I will pay attention to it.

mateuszbaran commented 8 months ago

Ref. https://github.com/JuliaNLSolvers/LineSearches.jl/issues/173 .

mateuszbaran commented 8 months ago

I've created the package for HZ: https://github.com/mateuszbaran/ImprovedHagerZhangLinesearch.jl .

kellertuer commented 8 months ago

Thanks for the discussion and this nice solution then :)

mateuszbaran commented 8 months ago

I don't really like this solution, but it seems to be the least bad one.

kellertuer commented 8 months ago

I also think it is not the optimal solution, but I do agree that thoroughly working through that code, making it (1) more Manopt.jl-like and (2) thoroughly documented, might indeed take more time.

mateuszbaran commented 7 months ago

I'm trying to slowly wrap this up. How would you organize the hyperparameter tuner? It's a mix of somewhat generic code and example-specific code. ObjectiveData is technically generic, but I'd imagine most people using it would have to tweak it anyway – there are simply too many fine details to cover every possible use case through an API. Probably ManoptExamples.jl would be the right place, at least for now?

kellertuer commented 7 months ago

Hi, I would first have to check what that function does, but I will try to find time for that in the next few days. But yes, ManoptExamples would probably be a good place. Note that we already have Rosenbrock in there.

kellertuer commented 7 months ago

If you would prefer having scripts that can be run more easily (than Quarto notebooks), we could also look into the new Quarto scripts, which I wanted to try on something anyway.

mateuszbaran commented 7 months ago

> This looks like a very great start to a thorough benchmark.
>
> My maybe most central remark or question is: how would we add this in a consistent way? The benchmark_comparison.jl for now is a script; should that become something we can run every now and then on a CI? Should the results be part of the docs? They could then be updated on a branch whenever the benchmark is run (either on CI and committed, or when run manually).

Currently I'm leaning towards turning it into a "how to choose the right solver (and its options) for your problem" tutorial. I'm not sure how (and for which problems) to run it on CI. Note that the optimization script is quite demanding computationally, despite all that advanced machinery.

> Does the benchmark run several solvers and several examples? This could maybe be modularized.

I've experimented with several examples, but I haven't decided which one to use. Most likely not Rosenbrock on the sphere, but I will decide once we settle on the right format.

> If you would prefer having scripts that can be run easier (than Quarto Notebooks) we could also look into the new Quarto Scripts which I wanted to try on something anyways.

I really liked Quarto notebooks, until updating Julia broke my setup and I could not get it back to work for a couple of hours (it still doesn't work; I just gave up – it generates the Jupyter notebook, but the Julia kernel refuses to run). That might be the main problem with turning it into a tutorial.

kellertuer commented 7 months ago

> I really liked Quarto notebooks until updating Julia broke my settings and I could not get it back to work for a couple of hours (it still doesn't work, I just gave up; it makes the Jupyter notebook but the Julia kernel refuses to run). That might be the main problem with turning it into a tutorial.

Remember to recompile IJulia. That is one of the main reasons I do so much Pkg.activate() ... stuff in the documentation. Whenever a new Julia version comes along, recompiling IJulia is crucial.
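The recompile step can be as short as the following sketch (it assumes IJulia is already a dependency of the per-folder environment being activated):

```julia
# Re-register the IJulia kernel after switching to a new Julia version.
using Pkg
Pkg.activate(@__DIR__)   # activate the local project environment
Pkg.build("IJulia")      # rebuilds IJulia and installs a kernelspec for this Julia
```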

I would also be fine with these being benchmarks and a bit less tutorial focussed.

mateuszbaran commented 7 months ago

> Remember to recompile IJulia. That is one of the main reasons I did so much Pkg.activate()... stuff in the documentation. If it happens that a new Julia version comes along, at least recompiling IJulia is crucial.

I did recompile IJulia. This script actually needs to run with the Conda.jl Python, but I'm not sure how to run the notebook in its Jupyter. Or perhaps I did something wrong when trying.

mateuszbaran commented 7 months ago

I just tried again, activating the Conda.jl environment. `quarto render` fails with "No module named yaml", despite me being able to import it when I run the Python REPL. Directly running the notebook in the browser complains about IJulia not being installed, despite my having just run `build IJulia` a moment ago.

kellertuer commented 7 months ago

Oh, Python dependencies – I nearly never manage to get those right. But from Julia with CondaPkg.jl (and its config file) I got them consistent.
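For reference, the config file mentioned is a CondaPkg.toml; a minimal sketch pinning the Python packages this setup seems to need (the exact package list is an assumption based on the errors above, not a tested configuration):

```toml
# CondaPkg.toml – Python dependencies resolved declaratively by CondaPkg.jl
[deps]
pyyaml = ""     # provides the `yaml` module that `quarto render` failed to find
jupyter = ""    # provides the Jupyter installation that runs the notebook
```

An empty version string means "any version"; pinning versions here is what makes the environment reproducible across machines.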

mateuszbaran commented 6 months ago

I've moved the interesting parts of this PR to separate PRs, so I think this one can be closed.

mateuszbaran commented 6 months ago

BTW, I tried to address some of your comments in the version submitted to ManoptExamples.