JuliaSmoothOptimizers / QuadraticModels.jl

Data structures for linear and quadratic optimization problems based on NLPModels.jl

Add GPU tests #125

Open tmigot opened 5 months ago

tmigot commented 5 months ago

With NLPModels 0.21, we are trying to support the NLPModel API for CuArray types.

For instance, things like this are being done: https://github.com/JuliaSmoothOptimizers/NLPModelsTest.jl/blob/3409ae7a292a1e7baa5955787c2db56aa42e386f/test/runtests.jl#L76

I suspect most of this package is already compatible, but maybe not 100%, e.g. https://github.com/JuliaSmoothOptimizers/QuadraticModels.jl/blob/main/src/qps.jl

tmigot commented 5 months ago

@geoffroyleconte I think you did some tests regarding this, do you think we could reuse that?

geoffroyleconte commented 5 months ago

Not everything is compatible. I managed to create some QuadraticModels and use them with CUDA, for example following https://github.com/JuliaSmoothOptimizers/RipQP.jl/issues/289#issuecomment-2081660874 .
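For reference, a minimal sketch of what building a QuadraticModel directly from GPU arrays can look like. The COO triplet constructor and keyword names below are assumptions based on QuadraticModels' CPU API, running it requires a CUDA-capable GPU, and whether the integer index vectors must also live on the GPU is exactly the kind of question the proposed tests would answer:

```julia
using CUDA, QuadraticModels

# minimize 0.5 xᵀHx + cᵀx  s.t.  lcon ≤ Ax ≤ ucon,  lvar ≤ x ≤ uvar,
# with the numerical data stored on the GPU (COO triplet form, lower triangle of H).
c = CuArray([1.0, -2.0])
qm = QuadraticModel(c, [1, 2], [1, 2], CuArray([2.0, 2.0]);
                    Arows = [1, 1], Acols = [1, 2], Avals = CuArray([1.0, 1.0]),
                    lcon = CuArray([0.0]), ucon = CuArray([1.0]),
                    lvar = CuArray([-1.0, -1.0]), uvar = CuArray([1.0, 1.0]))
```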

However, SlackModel in NLPModelsModifiers does not work. I had to do a workaround (that can probably be improved) in RipQP for SparseMatricesCOO.jl: https://github.com/JuliaSmoothOptimizers/RipQP.jl/blob/2b697a913dec98410bea6b63be49c71fd2070ff9/src/gpu_utils.jl#L188-L230

The presolve in QuadraticModels does not work (it uses scalar indexing in several places).

I also think that some functions will not work with CUDA.allowscalar(false), like https://github.com/JuliaSmoothOptimizers/QuadraticModels.jl/blob/0c05ef84f76410ed818a469371f2f8aff6336e48/src/qpmodel.jl#L347C20-L347C25

tmigot commented 5 months ago

Thanks @geoffroyleconte for the feedback! The last release of NLPModelsModifiers, 0.7.2, should have a GPU-compatible SlackModel; could you check?

Adding unit tests for this seems like an even better idea, so we can see what the limitations are and what we could fix.

geoffroyleconte commented 5 months ago

It looks good, even though I'm not sure I understand what this does: https://github.com/JuliaSmoothOptimizers/NLPModelsModifiers.jl/blob/d90bd8e564542d2654ac83596f9a3c1b513cc197/src/slack-model.jl#L162

But what is important is for this function to be implemented on the GPU: https://github.com/JuliaSmoothOptimizers/QuadraticModels.jl/blob/0c05ef84f76410ed818a469371f2f8aff6336e48/src/qpmodel.jl#L548-L572 because (at least for RipQP) I find it more convenient to work with the linear algebra data in the data field of a QuadraticModel than with the functions of the NLPModels API.
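For context, the kind of direct data access meant here might look like the following sketch (the field names of `qm.data` are assumptions read off QuadraticModels' source, not a documented guarantee):

```julia
# A solver can bypass the NLPModels API and read the problem data directly
# from the `data` field of a QuadraticModel `qm` (field names assumed):
d = qm.data
d.c0          # constant term of the objective
d.c           # linear objective term (a CuArray for a GPU-backed model)
d.H, d.A      # Hessian and constraint matrix (COO storage by default)
```

This is why the conversion function linked above matters for GPU support: a solver reading these fields inherits whatever array types they were built with.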

Also, looking at the functions for the constraints of SlackModel (even though I do not use them), some look like they use scalar indexing.

In the tests of NLPModelsModifiers, you use CUDA.allowscalar(): https://github.com/JuliaSmoothOptimizers/NLPModelsModifiers.jl/blob/d90bd8e564542d2654ac83596f9a3c1b513cc197/test/gpu_test.jl#L5 but shouldn't you use CUDA.allowscalar(false) if you want to verify that you don't use scalar indexing? @amontoison can you confirm? https://cuda.juliagpu.org/stable/usage/workflow/#UsageWorkflowScalar

amontoison commented 5 months ago

I confirm that you need CUDA.allowscalar(false) to verify that the code works correctly on the GPU.
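As a sketch, the difference between the two settings shows up like this (a CUDA-capable GPU is needed to actually run it):

```julia
using CUDA

CUDA.allowscalar(false)       # disallow scalar indexing on GPU arrays
x = CuArray([1.0, 2.0, 3.0])

y = x .+ 1.0                  # fine: broadcasting runs as a single GPU kernel
s = sum(x)                    # fine: mapped to a GPU-side reduction

x[1]                          # scalar getindex: now throws a scalar-indexing error,
                              # exposing code paths that would otherwise silently
                              # fall back to slow element-by-element transfers
```

With the default CUDA.allowscalar() setting, `x[1]` merely warns (or silently works in some contexts), so tests pass even on code that would be unusably slow on real problems.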

tmigot commented 5 months ago

Note, guys, that we are still at the beginning of this move and only at stages 1-2 of this workflow: https://cuda.juliagpu.org/stable/usage/workflow/#UsageWorkflowScalar so it is to be expected that we still need to allow scalar indexing in places.

I think: a) there are parts of the API that will never be really efficient on GPUs; b) we will slowly update existing implementations or add new ones :).
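As an illustration of point b), the typical fix is to replace element-wise loops with broadcasts that fuse into a single kernel. A hypothetical sketch (`clip_lower!` is an invented name, not a function from these packages):

```julia
# GPU-unfriendly: scalar indexing, one host-device round trip per element
# under CUDA.allowscalar(false) this throws instead of running slowly
function clip_lower_scalar!(y, x, lvar)
    for i in eachindex(x)
        y[i] = max(x[i], lvar[i])
    end
    return y
end

# GPU-friendly: one fused broadcast kernel, same code path for Array and CuArray
clip_lower!(y, x, lvar) = (y .= max.(x, lvar))
```

On CPU arrays both versions agree; with CUDA.allowscalar(false) on CuArrays, only the broadcast version runs.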