Closed dpo closed 7 years ago
There doesn't seem to be a supertype for generators, so it looks like I have to use `Any`.
Not sure what's going on here: https://travis-ci.org/JuliaSmoothOptimizers/Optimize.jl/jobs/169105504#L569
There remain many instances of Symbols: `:trunk` instead of `trunk`.
Why not add a `solve_problems_filtered` that removes the skipped instances from the stats?
In #28 I used `stats`, a dict of `Function`, but I prefer the solution here: a dict of `Symbol`, which may represent `Symbol(function_name)`.
I first only focused on the `two_solvers()` benchmark example. Symbols should be removed now.
There's a new optional argument `prune` to `solve_problems()`, which defaults to `true` and removes skipped problems from `stats`. That means the number of problems can only be determined after the benchmarks were run. That now happens in `profile_solvers()`.
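The pruning can be sketched with a plain dict filter. This is an illustrative sketch only: the `nothing` marker and problem names below are assumptions, not Optimize.jl's actual internals.

```julia
# Sketch: assume skipped problems are recorded as `nothing` in the stats dict
# (the real marker in Optimize.jl may differ).
stats = Dict(:dixmaane => 0.52, :genrose => nothing, :woods => 1.3)

# With prune=true, skipped entries would be dropped before profiling,
# so the problem count is only known after the run.
pruned = Dict(k => v for (k, v) in stats if v !== nothing)

length(pruned)  # 2: only problems that actually ran remain
```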
Finally, `bmark_and_profile()` returns both the stats and the profile.
I guess this error with Julia 0.4 is due to generator expressions only being available in 0.5?
Looks like it. Maybe use an array comprehension to pass?
Great idea.
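For reference, the 0.4-compatible change amounts to swapping parentheses for brackets; the names below are placeholders, not the package's code:

```julia
probs = [:p1, :p2, :p3]          # placeholder problem names
build(p) = string(p)             # placeholder for model construction

gen = (build(p) for p in probs)  # generator expression: Julia 0.5+ only, lazy
arr = [build(p) for p in probs]  # array comprehension: works on Julia 0.4, eager

collect(gen) == arr              # same elements either way
```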
How to modify this branch, which is already a PR? A PR within a PR?
For the moment, I have only the following 2 small corrections:
More general concerns.
I fixed "profiles vs. profile" and print the solver name. The discussion about stopping criteria should go to a general issue (it's not strictly related to this PR).
OK. Other small suggestions:
Remark: when executing `Pkg.test`, problem names are prefixed by "OptimizationProblems". I would like to remove the prefix. No such prefix is displayed when I benchmark ARCTR, and I don't know why.
I think printing the name of skipped problems will pollute the output quite a bit. In the near future, I think we should use logging in all packages. That will allow us to direct messages to a file or somewhere else so we only see essential information on the screen. Does that sound ok?
Running `benchmark` on Travis is tricky because it requires the AMPL models. I guess we could include just a few, but does that really belong in this package? Should the benchmark stuff be taken out of here?
```julia
julia> using OptimizationProblems

julia> p = dixmaane
dixmaane (generic function with 2 methods)

julia> string(p)
"OptimizationProblems.dixmaane"
```
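If we do want to strip the module prefix, one simple fix is to keep only the last dot-separated component. The helper name below is hypothetical:

```julia
# Hypothetical helper: drop the module qualification from a problem's name.
short_name(f) = split(string(f), '.')[end]

short_name("OptimizationProblems.dixmaane")  # "dixmaane"
```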
I don't see what is different when I benchmark ARCTR variants, but I get the prefix ARCTR for the solver and no prefix for the problems. Again, this discussion probably does not belong in this PR, but this behaviour remains a mystery. The ARCTR benchmarks are performed using the function `compare_solvers`, which would be a nice addition to Optimize.
```julia
function compare_solvers(solvers, probs)
  bmark_args = Dict{Symbol, Any}(:skipif => model -> model.meta.ncon > 0)
  profile_args = Dict{Symbol, Any}(:title => "f+g+hprod")
  stats, profiles = bmark_and_profile(solvers,
                                      (MathProgNLPModel(eval(p)(n), name=string(p)) for p in probs),
                                      bmark_args=bmark_args, profile_args=profile_args)
  return stats, profiles
end
```
I added a few nl files so the benchmark functions in `examples/benchmark.jl` are easier to run, but it's difficult to run them on Travis because profiles are not available on Julia 0.4. If we decide to drop Julia 0.4, things will become simpler.
Thanks
This PR changes the benchmark tools so that problems are no longer collections of `AbstractNLPModel`s but are generators, so the models are only instantiated when needed. Thus `run_mpb_problem()` and `run_ampl_problem()` are no longer useful. There is some overlap with #28.