MadNLP / DynamicNLPModels.jl

NLPModels for dynamic optimization

Added support for non-default matrix types including `CuArray`s #27

Closed · dlcole3 closed this 2 years ago

dlcole3 commented 2 years ago

Updated the source code to handle non-default matrix types, mainly by removing `zeros` calls and using `similar` instead. Updated `runtests.jl` to test on `Float32` types and `CuArray` types.
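For reference, the core pattern is just swapping hard-coded `zeros` allocations for type-preserving `similar` calls, roughly like this (a minimal sketch with illustrative names, not the actual diff):

```julia
# Before: always allocates a dense Float64 matrix, regardless of input type
build_buffer_old(Q, n) = zeros(n, n)

# After: `similar` keeps the array type and eltype of the user-supplied Q
# (Matrix{Float64}, Matrix{Float32}, CuArray, ...), then we zero it out
function build_buffer_new(Q, n)
    H = similar(Q, n, n)
    fill!(H, zero(eltype(H)))
    return H
end

build_buffer_new(rand(Float32, 2, 2), 4)  # returns a 4x4 Matrix{Float32}
```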

These changes also enable forming the dense formulation as `CuArray`s, so that `H`, `J`, `lcon`, `ucon`, `lvar`, and `uvar` are returned as `CuArray`s when the original arrays passed to `LQDynamicModel` are `CuArray`s.
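For example, building the model from GPU data looks roughly like this (a sketch with toy random data; the exact positional/keyword form of the constructor is written from memory, so treat the call as illustrative rather than copy-paste ready):

```julia
using CUDA, LinearAlgebra, DynamicNLPModels

ns, nu, N = 3, 2, 5                      # toy sizes: states, inputs, horizon
Q  = CuArray(Matrix(1.0I, ns, ns))       # cost matrices
R  = CuArray(Matrix(1.0I, nu, nu))
A  = CuArray(rand(ns, ns))               # dynamics
B  = CuArray(rand(ns, nu))
s0 = CuArray(rand(ns))                   # initial state

# With CuArray inputs, the dense formulation's H, J, lcon, ucon, lvar, and
# uvar come back as CuArrays as well (constructor call sketched from memory;
# see the LQDynamicModel docstring for the exact signature).
lq = LQDynamicModel(s0, A, B, Q, R, N)
```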

dlcole3 commented 2 years ago

I did not realize that testing against CUDA.jl would cause the CI tests to error out on runners without a GPU (though this makes a lot of sense). While these are removed from the tests, running `LQDynamicModel` on my own machine with `CuArray`s did work and resulted in `H` and `J` being returned as `CuArray`s.

codecov-commenter commented 2 years ago

Codecov Report

Merging #27 (4eec098) into main (9c6aa9b) will increase coverage by 0.11%. The diff coverage is 98.87%.

```diff
@@            Coverage Diff             @@
##             main      #27      +/-   ##
==========================================
+ Coverage   97.60%   97.71%   +0.11%
==========================================
  Files           1        1
  Lines         668      701      +33
==========================================
+ Hits          652      685      +33
  Misses         16       16
```
| Impacted Files | Coverage | Δ |
| --- | --- | --- |
| src/DynamicNLPModels.jl | 97.71% <98.87%> | +0.11% ⬆️ |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 9c6aa9b...4eec098.

sshin23 commented 2 years ago

Yes, we will need to set up a self-hosted runner. I'll set it up later today.

sshin23 commented 2 years ago

You can set up `runtests.jl` so that it runs the CUDA tests only when an NVIDIA GPU is detected. You can use `CUDA.has_cuda_gpu()` to check for one.
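For example, `runtests.jl` could gate the GPU tests behind something like this (a sketch; the included file names are placeholders):

```julia
using Test, CUDA

@testset "DynamicNLPModels" begin
    # CPU tests (Float64 / Float32) run everywhere
    include("cpu_tests.jl")          # placeholder file name

    # GPU tests run only when an NVIDIA GPU is actually present, so CI
    # without a GPU (or before the self-hosted runner exists) still passes
    if CUDA.has_cuda_gpu()
        include("gpu_tests.jl")      # placeholder file name
    else
        @info "No NVIDIA GPU detected; skipping CuArray tests"
    end
end
```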