JuliaSmoothOptimizers / ADNLPModels.jl


Use better colorings #246

gdalle opened this issue 2 weeks ago · status: Open

gdalle commented 2 weeks ago

As of #244, column coloring is used everywhere, for Jacobians and Hessians alike. However:

SparseMatrixColorings.jl also provides decompression utilities, which you may want to use for column and star coloring instead of recoding them (see the sketch below).

Of course, my recommendation is to use DifferentiationInterface.jl and stop worrying about all of that.
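[Editor's note: a minimal sketch of what reusing those decompression utilities could look like, assuming the `ColoringProblem` / `coloring` / `column_colors` / `decompress` API of SparseMatrixColorings.jl; the toy Hessian pattern and values are made up for illustration and are not ADNLPModels code.]

```julia
using SparseArrays, SparseMatrixColorings

# Toy symmetric Hessian and its sparsity pattern (illustrative only).
H_true = sparse([2.0 1 0; 1 3 1; 0 1 4])
S = sparse(Bool[1 1 0; 1 1 1; 0 1 1])

# Star coloring: exploit symmetry to reduce the number of colors.
problem = ColoringProblem(; structure=:symmetric, partition=:column)
result = coloring(S, problem, GreedyColoringAlgorithm())
colors = column_colors(result)

# Compressed representation: one column per color. In real code each
# column would come from a Hessian-vector product with the color's seed.
seeds = float.(stack(c -> colors .== c, 1:maximum(colors)))
B = H_true * seeds

# Decompression is provided by the package: no need to recode it.
H = decompress(B, result)
@assert H == H_true
```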

gdalle commented 2 weeks ago

Apparently the lines to modify are:

https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl/blob/855c4878632a18702543ac1967fa8dd1fd08f090/src/sparse_hessian.jl#L208-L216

I can cook up a PR this weekend.

amontoison commented 2 weeks ago

That would be fantastic, Guillaume!

gdalle commented 1 week ago

I have taken a look, and it's not possible with the current state of the code, because you interleave differentiation and decompression: evaluate the matrix-vector product for one color, scatter it into the relevant columns, rinse and repeat. Meanwhile, SparseMatrixColorings.jl allows a complete separation of these two steps, but that means you need to allocate a larger buffer for the differentiation, of size (rows, column_colors) or (row_colors, columns). As an example, here's how it's done in DI:

https://github.com/gdalle/DifferentiationInterface.jl/blob/b41db2ff5791d53e669e641a823effa957c42ff6/DifferentiationInterface/src/sparse/jacobian.jl#L75-L105
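[Editor's note: to illustrate the buffer shape, a hedged sketch of the fully separated workflow for a column-colored Jacobian, under the same SparseMatrixColorings.jl API assumptions as above; the toy function, its sparsity pattern, and the `jvp` helper are all illustrative, not the DI implementation.]

```julia
using SparseArrays, SparseMatrixColorings, ForwardDiff

f(x) = [x[1]^2, x[2] * x[3], x[1] + x[3]]  # toy function (illustrative)
x = [1.0, 2.0, 3.0]
S = sparse(Bool[1 0 0; 0 1 1; 1 0 1])      # its Jacobian sparsity pattern

# Phase 1: column coloring of the sparsity pattern.
problem = ColoringProblem(; structure=:nonsymmetric, partition=:column)
result = coloring(S, problem, GreedyColoringAlgorithm())
colors = column_colors(result)

# Phase 2: differentiation only, into a (rows, column_colors) buffer.
# One JVP per color, seeded with the sum of the columns sharing that color.
jvp(f, x, v) = ForwardDiff.derivative(t -> f(x .+ t .* v), 0.0)
B = zeros(size(S, 1), maximum(colors))
for c in 1:maximum(colors)
    B[:, c] = jvp(f, x, float.(colors .== c))
end

# Phase 3: decompression only, in a single call.
J = decompress(B, result)
```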

amontoison commented 1 week ago

Should we start to switch to DI.jl?

gdalle commented 1 week ago

Do you have benchmarks to monitor possible performance regressions? I can't guarantee that DI will be as good everywhere as hand-tuned code written for a specific backend.

tmigot commented 1 week ago

Agreed with @gdalle, we should move forward on the benchmarking side. #248 covers the gradient, and it should not be difficult to extend the benchmark to the other operators (a sketch follows below).
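[Editor's note: as a starting point, a hedged sketch of such an extended benchmark, using BenchmarkTools.jl and the NLPModels API; the test problem, dimension, and choice of operators are illustrative assumptions, not the actual #248 setup.]

```julia
using ADNLPModels, NLPModels, BenchmarkTools

# Extended Rosenbrock as a stand-in test problem (illustrative).
n = 100
f(x) = sum(100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2 for i in 1:n-1)
nlp = ADNLPModel(f, ones(n))
x = rand(n)

# Gradient is what #248 covers; the sparse Hessian is where the
# choice of coloring actually matters.
@btime grad($nlp, $x)
@btime hess($nlp, $x)
```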