SciML / Optimization.jl

Mathematical Optimization in Julia. Local, global, gradient-based and derivative-free. Linear, Quadratic, Convex, Mixed-Integer, and Nonlinear Optimization in one simple, fast, and differentiable interface.
https://docs.sciml.ai/Optimization/stable/
MIT License

Add OptimizationOptimJL.Optim.BFGS() tests for issues 747, 754 #758

sjdaines closed this 1 month ago

sjdaines commented 1 month ago

Failing test for https://github.com/SciML/Optimization.jl/issues/754
Passing test for https://github.com/SciML/Optimization.jl/issues/747

Partially addresses https://github.com/SciML/Optimization.jl/issues/755

NB: an unrelated problem: tests using `Optimization.AutoModelingToolkit()` were failing, so I added code such as `@test (sol = solve(prob, Optim.Newton())) isa Any # test exception not thrown` so that the exception is caught and the other tests still run.
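As a minimal sketch of that pattern (using a stand-in function, since the real call needs a full Optimization.jl problem set up):

```julia
using Test

# Hypothetical stand-in for `solve(prob, Optim.Newton())`; the real tests
# wrap the actual solve call the same way.
solve_stub() = 42

@testset "no-throw check" begin
    # If the wrapped expression throws, @test records an Error result
    # inside the testset instead of aborting the whole file.
    @test (sol = solve_stub()) isa Any
    @test sol == 42
end
```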


ChrisRackauckas commented 1 month ago

Failing test for https://github.com/SciML/Optimization.jl/issues/754

I don't see an @test_broken in the PR?
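For reference, a minimal sketch of what an `@test_broken` would look like (the `1 == 2` expression is a placeholder, not the real failing assertion from issue 754):

```julia
using Test

@testset "known-broken behaviour" begin
    # `@test_broken` records the expected failure as Broken rather than
    # Fail, so the suite stays green until the bug is fixed (at which
    # point it flips to an unexpected pass and flags the test).
    @test_broken 1 == 2
end
```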

codecov[bot] commented 1 month ago

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 67.46%. Comparing base (e2cad3d) to head (764c066). Report is 78 commits behind head on master.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master     #758      +/-   ##
==========================================
+ Coverage   63.85%   67.46%   +3.61%
==========================================
  Files          22       22
  Lines        1527     1632     +105
==========================================
+ Hits          975     1101     +126
+ Misses        552      531      -21
```


sjdaines commented 1 month ago

Converted to draft PR, as I think there are a couple of problems here with how the tests are organized, which are defeating attempts to maintain the tests (including marking tests with `@test_broken`):

  1. Errors due to AutoModelingToolkit (every case is erroring)
  2. The part that fails is often outside `@test`, which causes the whole test file to error, e.g. https://github.com/SciML/Optimization.jl/blob/15bd5a107ba585f45566f5f8ca41e005f8a4e9d9/lib/OptimizationOptimJL/test/runtests.jl#L159-L162, where the error is generated by `solve` on L161. This then errors out to the top level, so the rest of the tests do not run.

Is there an example of best practice for how tests like this should be structured, so that tests are independent, i.e. a failure in one test is contained and the rest still run? Use `@testset`? Is there a convenient way to achieve `@test_nothrow`?
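One common pattern (a sketch with a hypothetical failing call, not the actual Optimization.jl tests) is to wrap each case in its own `@testset`, which contains errors so that sibling sets still run:

```julia
using Test

# Hypothetical failing call standing in for e.g. `solve(prob, Optim.Newton())`.
failing_solve() = error("solver blew up")

sibling_ran = Ref(false)

try
    @testset "OptimizationOptimJL" begin
        @testset "case that errors" begin
            # The thrown exception is recorded as an Error result for this
            # inner testset instead of aborting the whole file.
            @test failing_solve() isa Any
        end
        @testset "independent case" begin
            sibling_ran[] = true   # still reached despite the earlier error
            @test 2 + 2 == 4
        end
    end
catch
    # The outermost testset rethrows a TestSetException summarising failures;
    # it is caught here only to show that the sibling set executed. A real
    # runtests.jl would usually let it propagate so CI fails.
end

@assert sibling_ran[]
```

The key point is that a failure or error inside a nested `@testset` is reported in the summary but does not prevent the remaining testsets from running.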

Looks like the net result of 1. and 2. combined is that the tests for OptimizationOptimJL have been failing for the last few releases.

sjdaines commented 1 month ago

Closing this, as restructuring the tests needs a more systematic look, and new tests really belong in the PRs with the fixes, e.g. https://github.com/SciML/Optimization.jl/pull/759 (I've added a comment there linking to the suggested new tests from this PR).