gdalle opened 2 weeks ago
Apparently the lines to modify are
I can cook up a PR this weekend
It will be fantastic, Guillaume!
I have taken a look, and it's not possible with the current state of the code, because differentiation and decompression are interleaved: you evaluate the matrix-vector product for one color, dispatch the result into the relevant columns, and repeat.
Meanwhile, SparseMatrixColorings.jl allows a complete separation of these two aspects. But that means you need to allocate a larger buffer for the differentiation, of size `(rows, column_colors)` or `(row_colors, columns)`.
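To make the separation concrete, here's a minimal sketch of the two-phase workflow with SparseMatrixColorings.jl. The coloring and decompression calls follow the package's documented API; the function `f` and the `jvp` stand-in are hypothetical (a toy linear map, so that one JVP is just a matrix-vector product — in practice it would come from an AD backend):

```julia
using SparseMatrixColorings, SparseArrays

# Toy linear function f(x) = A * x, so J(x) = A exactly and a JVP is A * v.
# In real code, `jvp` would be a forward-mode AD call.
A = sparse([2.0 0.0 1.0; 0.0 3.0 0.0; 1.0 0.0 4.0])
jvp(x, v) = A * v   # hypothetical stand-in for the AD backend

S = map(!iszero, A)   # known sparsity pattern
problem = ColoringProblem(; structure=:nonsymmetric, partition=:column)
result = coloring(S, problem, GreedyColoringAlgorithm())
colors = column_colors(result)
k = maximum(colors)

# Phase 1: differentiation only. One JVP per color, stored in a dense
# buffer of size (rows, number of colors) — the larger buffer mentioned above.
x = ones(size(A, 2))
B = zeros(size(A, 1), k)
for c in 1:k
    v = float.(colors .== c)   # seed vector: sum of columns sharing color c
    B[:, c] = jvp(x, v)
end

# Phase 2: decompression only, as a single separate call.
J = decompress(B, result)
```

Because the two phases never touch each other's state, the differentiation loop can be batched or swapped for a different backend without rewriting the decompression.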
As an example, here's how it's done in DI:
https://github.com/gdalle/DifferentiationInterface.jl/blob/b41db2ff5791d53e669e641a823effa957c42ff6/DifferentiationInterface/src/sparse/jacobian.jl#L75-L105
Should we start to switch to DI.jl?
Do you have benchmarks to monitor possible performance regressions? I can't guarantee that DI will be as good everywhere as hand-tuned code written for a specific backend.
Agreed with @gdalle, we should move forward on the benchmarking side. #248 is for the gradient, but it wouldn't be difficult to extend the benchmark to more operators.
As of #244, column coloring is used everywhere, for Jacobians and Hessians alike. However:
SparseMatrixColorings.jl also provides decompression utilities, which you may want to use for column and star coloring instead of recoding them.
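As a sketch of what reusing those utilities could look like instead of a hand-written scatter loop: `decompress!` fills a preallocated sparse matrix in place, so the output allocation happens once. The pattern `S` and buffer `B` below are made-up illustration data:

```julia
using SparseMatrixColorings, SparseArrays

# Illustrative sparsity pattern and its column coloring.
S = sparse(Bool[1 0 1; 0 1 0; 1 0 1])
problem = ColoringProblem(; structure=:nonsymmetric, partition=:column)
result = coloring(S, problem, GreedyColoringAlgorithm())

# Compressed representation: one dense column per color,
# e.g. produced by batched JVPs (values are made up here).
B = [2.0 1.0; 3.0 0.0; 1.0 4.0]

# Allocate the sparse output once, with the same structure as S,
# then decompress in place on every subsequent call.
J = similar(S, Float64)
decompress!(J, B, result)
```

The same `coloring`/`decompress!` pair covers the symmetric case by switching to `structure=:symmetric`, which is where star coloring comes in.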
Of course my recommendation is to use DifferentiationInterface.jl and stop worrying about all of that