Open JOlucak opened 5 months ago
There are a few things we should do:
- Align the qpsol syntax to conic in sdpsol, and/or call conic directly (but avoid code redundancy).
- Representing the coefficient matrix using MX variables might also have an impact.
Current status
A first version to analytically calculate dg/dQg, where g are the constraint functions and Qg the matrix decision variables. The implementation was/is tested with the GTM example in different ways:
We obtain the same results. The computation of the analytical part is negligible (compared to other, more demanding parts) but can be implemented more efficiently (first version!).
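For Gram-matrix decision variables the analytic derivative has a simple closed form: if a constraint has the SOS structure g(x) = z(x)^T Q z(x) for a monomial basis z(x), then dg/dQ_ij = z_i(x) * z_j(x). A minimal numpy sketch of this idea, checked against finite differences (the basis values and matrix below are illustrative, not the actual GTM data):

```python
import numpy as np

def g(Q, z):
    # SOS-style constraint value: g = z^T Q z
    return z @ Q @ z

def dg_dQ_analytic(z):
    # For g = z^T Q z, the partial derivative w.r.t. Q_ij is z_i * z_j
    return np.outer(z, z)

# Illustrative monomial basis evaluated at some point, e.g. [1, x, x^2] at x = 0.5
z = np.array([1.0, 0.5, 0.25])
Q = np.random.default_rng(0).random((3, 3))

# Finite-difference check of the analytic Jacobian
eps = 1e-6
J_fd = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3))
        E[i, j] = eps
        J_fd[i, j] = (g(Q + E, z) - g(Q, z)) / eps

assert np.allclose(dg_dQ_analytic(z), J_fd, atol=1e-5)
```

The outer-product structure is also what makes these derivatives cheap: no expression-graph traversal is needed, only evaluations of the basis.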
The third test varied the polynomial and SOS decision variables (maximum degree). The "new" approach works but is only slightly more efficient, for a few reasons:
I will continue working on that, although I must admit I don't know how we can potentially speed up the calculation of dg/dx.
Additional remarks:
I found a weird behaviour I can't explain.
If we set up a rather large problem and compute the Jacobian for different parts (instead of for all decision variables at once), the computation times differ a lot:
dg/dQgram_g ~ 6 s
dg/dQlin_x ~ 400 s
Now the weird part are the sizes:
g = 72295 x 1 SX
Qgram_g = 1123544 x 1 SX
Qlin_x = 1 x 4350 SX
dg/dQgram_g = 72295 x 1123544
dg/dQlin_x = 72295 x 4350
I don't understand why the latter one takes so much longer. g is the same.
The expressions in Qgram_g and Qlin_x are just SX variables that are the coefficients/decision variables, so there is not really a difference in the expressions.
Especially, the last screenshot shows an expression of g together with Qgram_g and Qlin_x. I don't see a reason why it is more difficult to compute for Qlin_x.
See screenshots
Remark on the previous comment.
The variables of Qgram_g enter the constraints in a very structured manner, i.e., the partial derivative is either zero or an integer. On the other hand, Qlin_x enters as an arbitrary expression.
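One consequence of this structure: when decision variables enter the constraints affinely with constant coefficients (as the Gram entries do here), the Jacobian is itself a constant matrix and can be recovered by plain evaluation instead of symbolic differentiation, e.g. by probing with unit vectors. A hedged sketch of that idea (the affine constraint g(q) = A q + b below is made up for illustration):

```python
import numpy as np

# Hypothetical affine constraint g(q) = A q + b; in the SOS setting,
# A would contain only zero/integer entries for the Qgram_g variables.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0]])
b = np.array([1.0, -1.0])

def g(q):
    return A @ q + b

def jacobian_by_probing(g, n):
    # For affine g, column j of the Jacobian is g(e_j) - g(0):
    # no expression-graph differentiation needed.
    g0 = g(np.zeros(n))
    return np.column_stack([g(np.eye(n)[j]) - g0 for j in range(n)])

J = jacobian_by_probing(g, 3)
assert np.allclose(J, A)
```

For Qlin_x, which enters through arbitrary expressions, this shortcut does not apply and a general differentiation pass over the whole graph of g is needed, which may explain part of the timing gap.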
It seems that the Jacobian calculation and simplification in SdpSolInternal are responsible for the high computational effort when building the solver. See the first figure. Currently, only simple cost functions (level sets) are considered. I think that if more sophisticated cost functions are used in the future, this becomes even more of a bottleneck, since the Hessian calculation becomes more difficult.
For small problems (such as VDP) this is not a big deal. However, for medium-sized problems (6 states) and higher-order polynomials (e.g. degree 6) we run into high numerical effort. See figure 2.
I am aware that polynomials of degree 6 are already quite large, but if we want to solve larger systems in the future, or to obtain less conservative results, this should be improved. Is it possible to improve this computation in CasADi? What else is possible?