SciML / SciMLBenchmarks.jl

Scientific machine learning (SciML) benchmarks, AI for science, and (differential) equation solvers. Covers Julia, Python (PyTorch, Jax), MATLAB, R
https://docs.sciml.ai/SciMLBenchmarksOutput/stable/
MIT License

CUTEST.jl benchmark wrappers #935

Open ChrisRackauckas opened 4 months ago

ChrisRackauckas commented 4 months ago

https://github.com/JuliaSmoothOptimizers/CUTEst.jl is a set of nonlinear optimization problems. We should use those to set up benchmarks between the optimizers, splitting between constrained and unconstrained problems and comparing as many optimizers as possible via work-precision diagrams.
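A minimal sketch of what one work-precision measurement could look like, assuming Optimization.jl and OptimizationOptimJL.jl. The Rosenbrock function stands in for a CUTEst problem here; the loop and its reporting format are illustrative, not the benchmark harness itself.

```julia
# Hedged sketch: for each optimizer, record wall time and the final objective
# value on a single test problem. Rosenbrock (minimum 0 at (1, 1)) stands in
# for a CUTEst problem; NelderMead/BFGS/LBFGS come from OptimizationOptimJL.
using Optimization, OptimizationOptimJL

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

# Analytic in-place gradient, so no AD backend is needed for this sketch.
function rosenbrock_grad!(G, u, p)
    G[1] = -2 * (p[1] - u[1]) - 4 * p[2] * u[1] * (u[2] - u[1]^2)
    G[2] = 2 * p[2] * (u[2] - u[1]^2)
    return G
end

optf = OptimizationFunction(rosenbrock; grad = rosenbrock_grad!)
prob = OptimizationProblem(optf, [0.0, 0.0], [1.0, 100.0])

for alg in (NelderMead(), BFGS(), LBFGS())
    t = @elapsed sol = solve(prob, alg)
    # With a known optimum of 0, the objective value is the error axis of
    # the work-precision plot; t is the work axis.
    println(rpad(string(nameof(typeof(alg))), 12), "  time = ", t,
            "  error = ", sol.objective)
end
```

A full benchmark would repeat this over the CUTEst problem set and sweep solver tolerances to trace out the work-precision curves.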

alonsoC1s commented 1 month ago

I'm in the process of familiarizing myself with the structure of the packages involved and would like to check that I'm understanding things so far.

If I understand correctly, the first step to solve this issue would be to define a way to convert from the NLPModel instances that are currently used in CUTEst.jl to a subtype of SciMLBase.OptimizationProblem. Is there a relevant issue in the CUTEst.jl repo so that I can start a PR over there?
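For concreteness, a hedged sketch of what that conversion could look like. `CUTEstModel`, `obj`, `grad!`, and `finalize` are real CUTEst.jl/NLPModels.jl API, and `OptimizationFunction`/`OptimizationProblem` are real Optimization.jl API, but the glue code below is illustrative; it is not an existing extension.

```julia
# Hypothetical NLPModels -> OptimizationProblem bridge, sketched by hand.
using CUTEst, NLPModels, Optimization, OptimizationOptimJL

nlp = CUTEstModel("ROSENBR")  # an unconstrained CUTEst test problem

# Optimization.jl expects f(u, p) and an in-place gradient g!(G, u, p);
# NLPModels provides obj(nlp, x) and grad!(nlp, x, G), so wrap them.
f(u, p) = obj(nlp, u)
g!(G, u, p) = grad!(nlp, u, G)

optf = OptimizationFunction(f; grad = g!)
prob = OptimizationProblem(optf, nlp.meta.x0)  # start from the SIF initial point

sol = solve(prob, LBFGS())
finalize(nlp)  # CUTEst problems hold external resources and must be closed
```

A real extension would presumably also map the constraint callbacks (`cons`, `jac`, bounds from `nlp.meta`) for the constrained split, rather than just the objective and gradient shown here.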

ChrisRackauckas commented 1 month ago

There is not an issue I know of in that repo. It would probably make sense to make an NLPModels extension to Optimization.jl.

alonsoC1s commented 1 month ago

I agree, it makes sense to handle it inside the SciML umbrella. Should I open an issue and ask for comments on the Optimization.jl repo then?