I am rather confused as to why you are using the unconstrained tools (`ufn`, `uofg`, `ugr`) on a constrained problem. You need `cfn`, etc.
@amontoison yes you need to use the constrained tools (`cfn`, `cofg`, `cgr`). The Python interface automatically selects the correct tools depending on whether the problem is constrained or not.
@jfowkes Is it possible to evaluate just the objective if we have a constrained problem? And the same for the gradient of the objective?
@amontoison there is now `cifn` available for evaluating just the objective.
@amontoison there is also `cigr` for evaluating just the objective gradient for constrained problems.
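For reference, here is a hedged sketch of what calling these tools directly from Julia could look like, assuming the documented signatures `CUTEST_cifn(status, n, iprob, x, f)` and `CUTEST_cigr(status, n, iprob, x, g)` with `iprob = 0` selecting the objective; `libcutest`, `n`, and `x` are assumed to be set up beforehand:

```julia
# Hedged sketch, not CUTEst.jl source: with iprob = 0, cifn/cigr evaluate
# only the objective value/gradient of a constrained problem.
status = Ref{Cint}(0)
iprob  = Ref{Cint}(0)                 # 0 => objective, k => constraint k
f      = Ref{Float64}(0.0)
g      = Vector{Float64}(undef, n)
ccall((:cutest_cifn_, libcutest), Cvoid,
      (Ptr{Cint}, Ptr{Cint}, Ptr{Cint}, Ptr{Float64}, Ptr{Float64}),
      status, Ref{Cint}(n), iprob, x, f)
ccall((:cutest_cigr_, libcutest), Cvoid,
      (Ptr{Cint}, Ptr{Cint}, Ptr{Cint}, Ptr{Float64}, Ptr{Float64}),
      status, Ref{Cint}(n), iprob, x, g)
```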
In CUTEst.jl, `obj(nlp, x)` doesn't call `uofg`; it calls `ufn` or `cfn` depending on whether there are constraints or not: https://github.com/JuliaSmoothOptimizers/CUTEst.jl/blob/main/src/julia_interface.jl#L56. I don't know what `obj2` and `grad2` are, but I hope they either get removed on the basis of their name, or that they are renamed.
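To make the dispatch concrete, a minimal sketch (not the actual julia_interface.jl code; `cutest_ufn` and `cutest_cfn` are hypothetical stand-ins for the low-level wrappers):

```julia
# Sketch of the dispatch in obj: the constrained tool cfn evaluates the
# constraints as well, so only the objective value is returned here.
function obj_sketch(nlp, x)
    if nlp.meta.ncon == 0
        return cutest_ufn(nlp, x)      # unconstrained problem: ufn
    else
        f, _ = cutest_cfn(nlp, x)      # constrained problem: cfn gives (f, c)
        return f
    end
end
```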
@dpo It was just a way to compare the new version of `obj` and `grad!` with the old one.
If we just evaluate the objective (`obj`), we don't want to also evaluate the constraints at the same time. The user can call `objcons` if they want that. In the same spirit, we don't want to compute the Jacobian when the user calls `grad!`.
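To illustrate the intended separation with the NLPModels API (a small usage sketch; HS14 is used here only as an example constrained problem):

```julia
using CUTEst, NLPModels

nlp = CUTEstModel("HS14")     # a constrained problem
x = nlp.meta.x0

f = obj(nlp, x)               # objective only, no constraint evaluation
g = grad(nlp, x)              # objective gradient only, no Jacobian
f, c = objcons(nlp, x)        # objective and constraints in one call

finalize(nlp)
```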
Done in https://github.com/JuliaSmoothOptimizers/CUTEst.jl/pull/375
I'm closing the issue, thank you Nick and Jari for your help!
My biggest issue now is that a modification was made here between CUTEst v2.0.7 and v2.0.27 that deallocates memory allocated by C or Julia (it's not allowed!!!). It leads to a segmentation fault because the garbage collector probably tries to access a null pointer. 😩
OK, I think I found the culprit.
A recent commit changed the size of the vector `time` in the routines `ureport` and `creport`: it was previously 2 and is now 4.
Debugging this was quite a challenge...
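For context, a hedged sketch of the broken call pattern (the `ccall` is left commented since it needs a loaded `libcutest`; the 7-element `calls` array follows the creport documentation):

```julia
# The TIME array passed to creport must now have length 4 (previously 2).
# A shorter Julia buffer lets the Fortran side write past the end of
# GC-managed memory, which explains the segmentation fault above.
status = Cint[0]
calls  = Vector{Float64}(undef, 7)   # call counters returned by creport
time   = Vector{Float64}(undef, 4)   # was Vector{Float64}(undef, 2) before
# ccall((:cutest_creport_, libcutest), Cvoid,
#       (Ptr{Cint}, Ptr{Float64}, Ptr{Float64}), status, calls, time)
```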
I compiled CUTEst with several options (`-g`, `-fbounds-check`, `-fcheck=all`) to detect memory leaks and other issues, and I also encountered the following error:
```
At line 14 of file ../src/tools/unames.F90
Fortran runtime error: Actual string length is shorter than the declared one for dummy argument 'pname' (1/10)
```
I am sure that I am providing an argument `pname` of size 10, so I don't understand this error.
On the bright side, the Julia interface is now working in both single and double precision across all platforms (Linux, Intel Mac, Apple Silicon Mac, Windows).
I found a way to provide a small MinGW artifact (127 MB) that is automatically installed at runtime for Windows users and includes a `gfortran.exe`.
It compiles the decoded SIF files transparently, so users don't need to install anything like GALAHAD.jl.
The new `sifdecoder_standalone` also helped a lot.
The next step is to support quadruple precision in the Julia interface.
Yes, the `time` array length was a bug on the Fortran side that we had to fix.
What is the call to `unames` that gives this `pname` error?
It was:
```julia
status = Cint[0]
n = nlp.meta.nvar
m = nlp.meta.ncon
pname = Vector{Cchar}(undef, 10)
vname = Matrix{Cchar}(undef, 10, n)
if m == 0
    cutest_unames_(status, Cint[n], pname, vnames)
end
```
The SIF problem was "BROWNDEN" (the first problem in my unit tests).
@amontoison I think you want to pass in `vname`, not `vnames`, into `unames()`, right?
Yes, it's `vname`. I just made a typo when I copy-pasted the code here, but the code in CUTEst.jl is correct.
@amontoison it's entirely possible there is a bug in the `unames` C interface as I have never tested it; in the Python interface we use `pbname`, `varnames`, `connames` instead of `unames`.
I wonder... in GALAHAD, when we move from Fortran to C characters, we have to account for the extra null that C expects at the end of a string when using the standard Fortran C bindings. The strings on the C side are always one character longer to account for this. But we don't do this in the C interface to CUTEr that @dpo wrote, which is now part of `cutest.h`. @jfowkes, the map in `varnames` looks identical to that in `unames`, so I am not sure why it would fail for one and not the other. @amontoison, what happens if you use `probname` instead of `unames` to examine `pname`? Is this just the compiler being picky with your debug flags?
It might be that we will have to write `CUTEST_Cint_*`-style variants for all CUTEst tools that have character dummy arguments; there aren't very many, just `u/cnames`, `probname`, `varnames`, `connames`. I'll leave this to one of you.
I don't have any warning with `probname` if I use it instead of `unames`.
I suspect that `gfortran` is just being picky.
For the C characters, I have two options in Julia: I can specify whether a string is null-terminated (`Cstring`) or not (`Ptr{Cchar}`).
So it's not an issue for me.
But for interoperability with C users, `CUTEST_Cstring_...` is relevant.
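A minimal sketch of the distinction (the `strlen` call is standard C; the commented `unames` call is hypothetical and assumes a loaded `libcutest`):

```julia
# Cstring: Julia expects a null-terminated buffer and checks for embedded
# nulls, matching the convention of standard C string functions.
len = ccall(:strlen, Csize_t, (Cstring,), "BROWNDEN")   # returns 8

# Ptr{Cchar}: raw bytes with no terminator, matching a Fortran
# character(len=10) dummy argument, which is blank-padded instead.
pname = fill(Cchar(' '), 10)
# ccall((:cutest_unames_, libcutest), Cvoid,
#       (Ptr{Cint}, Ptr{Cint}, Ptr{Cchar}, Ptr{Cchar}),
#       status, Cint[n], pname, vname)
```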
Nick, did you run any unit tests on the quadruple precision version of CUTEst? Just to be sure that I'm not trying to find bugs in the Julia interface for nothing.
@amontoison you can run some unit tests for quadruple precision with:
```
$CUTEST/bin/runcutest -A ${{ matrix.arch }} --quadruple -p test --decode ALLINITU
$CUTEST/bin/runcutest -A ${{ matrix.arch }} --quadruple -p test --decode ALLINITC
```
which is equivalent to what we're doing for single and double precision in `ci.yml`. We should probably add this to the CI (not sure if the CI compiles in quadruple precision at the moment).
There are more comprehensive tests (of each subroutine), but these are not yet enabled in quad. Sometime soon, when I get the time...
OK, done. For the makefile version, it's
```
make -s -f /home/nimg/Dropbox/fortran/optrove/cutest/makefiles/pc64.lnx.gfo PRECIS=quadruple run_test_cutest
```
from ./src/test. I'm sure you will know the meson magic!
Please remember, this will only work with compilers that support `real128`.
I will update `ci.yml` to include this test. `ci.yml` depends on the "makemaster" build system, so it's easy to add.
Good news guys, `CUTEst.jl` is also working in quadruple precision now.
It means that if one day we want to support quadruple precision in GALAHAD, it will take less than a day of work to use it in the Julia interface.
```julia
julia> using CUTEst, Quadmath, NLPModels  # automatically downloads the SIF collection for the user
julia> x = rand(2)
2-element Vector{Float64}:
0.5547518833473369
0.6324463397132569
julia> nlp = CUTEstModel("HS14", precision = :single)
Problem name: HS14
All variables: ████████████████████ 2 All constraints: ████████████████████ 2
free: ████████████████████ 2 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzh: ( 33.33% sparsity) 2 linear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
nonlinear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
nnzj: ( 0.00% sparsity) 4
julia> x_single = Float32.(x)
2-element Vector{Float32}:
0.5547519
0.63244635
julia> obj(nlp, x_single)
2.2238379f0
julia> cons(nlp, x_single)
2-element Vector{Float32}:
0.28985918
0.52307415
julia> hess(nlp, x_single)
2×2 Symmetric{Float32, SparseMatrixCSC{Float32, Int64}}:
2.0 ⋅
⋅ 2.0
julia> nlp = CUTEstModel("HS14", precision = :double)
Problem name: HS14
All variables: ████████████████████ 2 All constraints: ████████████████████ 2
free: ████████████████████ 2 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzh: ( 33.33% sparsity) 2 linear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
nonlinear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
nnzj: ( 0.00% sparsity) 4
julia> x_double = Float64.(x)
2-element Vector{Float64}:
0.5547518833473369
0.6324463397132569
julia> obj(nlp, x_double)
2.223837811878252
julia> cons(nlp, x_double)
2-element Vector{Float64}:
0.2898592039208232
0.5230742143639493
julia> hess(nlp, x_double)
2×2 Symmetric{Float64, SparseMatrixCSC{Float64, Int64}}:
2.0 ⋅
⋅ 2.0
julia> nlp = CUTEstModel("HS14", precision = :quadruple)
Problem name: HS14
All variables: ████████████████████ 2 All constraints: ████████████████████ 2
free: ████████████████████ 2 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzh: ( 33.33% sparsity) 2 linear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
nonlinear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
nnzj: ( 0.00% sparsity) 4
julia> x_quadruple = Float128.(x)
2-element Vector{Float128}:
5.54751883347336938179239496093941852e-01
6.32446339713256922010486960061825812e-01
julia> obj(nlp, x_quadruple)
2.22383781187825211305610387984903636e+00
julia> cons(nlp, x_quadruple)
2-element Vector{Float128}:
2.89859203920823094158265575970290229e-01
5.23074214363949287782027980068469322e-01
julia> hess(nlp, x_quadruple)
2×2 Symmetric{Float128, SparseMatrixCSC{Float128, Int64}}:
2.00000000000000000000000000000000000e+00 ⋅
⋅ 2.00000000000000000000000000000000000e+00
```
Excellent. Thanks, Alexis. I suppose that we can think of 128bit GALAHAD, but the main issue will be lack of suitable lapack/blas (other than the ones that we compile). At least now that we have the HSL subset, it is trivial to extend HSL to 128bit, so we do have some useful sparse solvers. Goodness knows what will happen with SPRAL or MUMPS, though.
@jfowkes Do you observe the same issue with the Python interface? I noticed it with the problem "HS36":