The TensCalc Matlab toolbox provides an environment for performing nonlinear constrained optimization.
The variables to be optimized can be multi-dimensional arrays of any dimension (tensors) and the cost functions and inequality constraints are specified using Matlab-like formulas.
Interior-point methods are used for the numerical optimization; they rely on formulas for the gradient and the Hessian matrix that are computed symbolically in an automated fashion.
The package can produce either optimized Matlab code or C code. The former is preferable for very large problems, whereas the latter is preferable for small to mid-size problems that need to be solved in just a few milliseconds. The C code can be used from inside Matlab through an (automatically generated) cmex interface or in standalone applications. No libraries are required for the standalone code.
TensCalc's user guide can be found online at https://tenscalc.readthedocs.io
A technical description of the algorithms behind TensCalc can be found at https://www.ece.ucsb.edu/~hespanha/published/tenscalc_imp-20170630.pdf
The TensCalc toolbox supports Matlab running under:
OSX
linux
Microsoft Windows
To install TensCalc:
Install the FunParTools toolbox.
Install the CmexTools toolbox. This will only succeed after installing FunParTools.
Download TensCalc using one of the following options:
downloading it as a zip file from https://github.com/hespanha/tenscalc/archive/master.zip and unzipping to an appropriate location
checking out this repository with svn, e.g., using the shell command
svn checkout https://github.com/hespanha/tenscalc.git
cloning this repository with Git, e.g., using the shell command
git clone https://github.com/hespanha/tenscalc.git
The latter two options are recommended because you can subsequently use svn update or git pull to upgrade TensCalc to the latest version.
After this, you should have at least the following folders:
tenscalc
tenscalc/lib
tenscalc/examples
tenscalc/doc
Enter the folder tenscalc and execute the following command at the Matlab prompt:
install_tenscalc
MATLAB must have write permissions to the folder tenscalc/lib.
This will only succeed if you have already installed FunParTools and CmexTools (steps 1 and 2 above).
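For reference, here is a minimal sketch of this step from the Matlab prompt, assuming the repository was placed in the current folder and that FunParTools and CmexTools are already installed and on the path:
% go into the TensCalc folder and run the installation script
cd tenscalc
install_tenscalc   % needs write permission to tenscalc/lib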
To test if all is well, go to tenscalc/examples and try a few examples, such as
mls
sls
l1l2estimationCS
A few Model Predictive Control (MPC) examples can be found in tenscalc/examples/mpcmhe, such as
mpc_dcmotor
mpc_quadcopter
mpc_unicycle_pursuit
mpcmhe_dcmotor
For this to work, MATLAB must have write permissions to the folder tenscalc/examples.
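For reference, a minimal sketch of running the examples from the Matlab prompt (the example names are the ones listed above; each example first generates its solver, which can take a while, as noted below):
cd tenscalc/examples
mls                 % generates and runs one of the basic examples
cd mpcmhe
mpc_dcmotor         % one of the MPC examples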
For reasons that I do not yet fully understand, the compilation of the solvers generated by TensCalc takes much longer under Microsoft Windows 10 than under OSX. E.g., each of the solvers generated by sls above takes about 10-20s to compile under OSX, but 170-200s under Windows 10. Nevertheless, the actual solvers run essentially as fast (the last optimization in sls takes about 850us under OSX and 941us under Windows 10). I am running Windows 10 inside a virtual machine (Parallels), which could explain somewhat slower speeds, but not a 10x increase in compilation time. Please email me if you have ideas.
TensCalc's user guide can be found online at https://tenscalc.readthedocs.io, but here is a very quick overview.
Tensors are essentially multi-dimensional arrays, but one needs to keep in mind that in Matlab every variable is an array of dimension 2 or larger. However, this is not always suitable for TensCalc, which also needs arrays of dimension 0 (i.e., scalars) and 1 (i.e., vectors). This can create confusion because Matlab automatically “upgrades” scalars and vectors to matrices (by adding singleton dimensions), but this is not done for TensCalc expressions.
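As an illustration (a minimal sketch; the sizes are arbitrary), the Tvariable command used in the example below can declare tensors of dimension 0, 1, and 2, which have no direct Matlab counterpart without singleton dimensions:
Tvariable a [];      % scalar: tensor with 0 dimensions
Tvariable v [8];     % vector: tensor with 1 dimension (8 entries)
Tvariable M [3,8];   % matrix: tensor with 2 dimensions
% unlike native Matlab arrays, a and v are not automatically
% "upgraded" to 1x1 and 8x1 matrices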
The basic objects in TensCalc are symbolic tensor-valued expressions (STVEs). These expressions typically involve symbolic variables; the expressions can be manipulated symbolically, evaluated for specific values of their variables, and optimized.
Prior to numerical optimization, STVEs must be “compiled” for efficient computation. This compilation can take a few seconds or even minutes, but results in highly efficient Matlab or C code. Big payoffs arise when you need to evaluate or optimize an expression multiple times, for different values of its input variables. TensCalc’s compilation functions thus always ask you to specify input parameters. Much more on TensCalc’s compilation tools can be found in CSparse’s documentation.
The following sequence of TensCalc commands declares an STVE to be used in a simple least-squares optimization problem:
N=100; n=8;
Tvariable A [N,n];
Tvariable b N;
Tvariable x n;
y=A*x-b;
J=norm2(y)
To perform an optimization we need to create an appropriate specialized Matlab class (say called "minslsc"), using the following command:
cmex2optimizeCS('classname','minslsc',...
'objective',J,...
'optimizationVariables',{x},...
'constraints',{x>=0,x<=.05},...
'outputExpressions',{J,x},...
'parameters',{A,b},...
'solverVerboseLevel',3);
The goal of this class is to minimize the symbolic expression J with respect to the variable x, subject to the constraints x>=0 and x<=.05. The symbolic variables A and b are declared as parameters that can be changed from optimization to optimization. Setting solverVerboseLevel to 3 asks for a moderate amount of debugging information to be printed while the solver is executed (one line per iteration of the solver).
One can see the methods available for the class minslsc generated by cmex2optimizeCS using the usual help command, which produces:
>> help minslsc
% Create object
obj=minslsc();
% Set parameters
setP_A(obj,{[100,8] matrix});
setP_b(obj,{[100,1] matrix});
% Initialize primal variables
setV_x(obj,{[8,1] matrix});
% Solve optimization
[status,iter,time]=solve(obj,mu0,int32(maxIter),int32(saveIter));
% Get outputs
[y1,y2]=getOutputs(obj);
The following commands create an instance of the class and perform the optimization for specific parameter values:
thisA=rand(N,n);
thisb=rand(N,1);
x0=.02*rand(n,1);
obj=minslsc();
setP_A(obj,thisA);
setP_b(obj,thisb);
setV_x(obj,x0);
mu0=1;
maxIter=20;
[status,iter,time]=solve(obj,mu0,int32(maxIter),int32(-1));
[Jcstar,xcstar]=getOutputs(obj);
The parameters mu0 and maxIter passed to the solver are the initial value of the barrier variable and the maximum number of Newton iterations, respectively.
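As a quick sanity check (a sketch, assuming the solver converged and that norm2 denotes the sum of squares), the returned outputs can be verified directly in Matlab:
% recompute the cost from the returned minimizer and check the bounds
fprintf('reported cost   : %g\n',Jcstar);
fprintf('recomputed cost : %g\n',norm(thisA*xcstar-thisb)^2);
fprintf('min(x)=%g (should be >=0), max(x)=%g (should be <=.05)\n',...
        min(xcstar),max(xcstar));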
The example above and many others can be found in tenscalc/examples.
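The toolbox can also generate pure Matlab code instead of C code (see the overview above). Here is a sketch of the same problem using class2optimizeCS, under the assumption that it accepts the same parameters as cmex2optimizeCS (the class name minslsm is just an illustrative choice to avoid clashing with the C-code class):
class2optimizeCS('classname','minslsm',...
'objective',J,...
'optimizationVariables',{x},...
'constraints',{x>=0,x<=.05},...
'outputExpressions',{J,x},...
'parameters',{A,b},...
'solverVerboseLevel',3);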
TensCalc's user guide can be found online at https://tenscalc.readthedocs.io
Additional technical information can be found at
doc/tenscalc.pdf
doc/ipm.pdf
doc/csparse.pdf
doc/computationgraphs.pdf
doc/timeseries.pdf
TensCalc's LU factorization can be performed using Tim Davis' SuiteSparse toolbox. This generally slows down TensCalc significantly, but in some cases can improve robustness. The use of SuiteSparse is enabled by setting the parameter umfpack to true (a sketch follows the steps below) and requires SuiteSparse to be installed and in the path. To do this:
Download SuiteSparse from https://github.com/DrTimothyAldenDavis/SuiteSparse
Install SuiteSparse by executing make library followed by make install.
Installation of SuiteSparse requires cmake and Intel MKL BLAS or OpenBLAS (see the SuiteSparse installation notes). On OSX, with OpenBLAS from MacPorts and static libraries, I used
make library LDFLAGS='-L/Users/hespanha/GitHub/tenscalc/SuiteSparse/lib -L/opt/local/lib'
Install the package UMFPACK/MATLAB by entering the UMFPACK/MATLAB folder and executing umfpack_make at the Matlab prompt.
Make sure your installation succeeded by executing umfpack_demo
Add UMFPACK/MATLAB to your Matlab path and save the path.
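Once SuiteSparse/UMFPACK is installed and in the path, the umfpack parameter mentioned above can be added to the code-generation call. A sketch based on the least-squares example (the umfpack line is the only change):
cmex2optimizeCS('classname','minslsc',...
'objective',J,...
'optimizationVariables',{x},...
'constraints',{x>=0,x<=.05},...
'outputExpressions',{J,x},...
'parameters',{A,b},...
'umfpack',true,...   % use SuiteSparse/UMFPACK for the LU factorizations
'solverVerboseLevel',3);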
While most Matlab scripts are agnostic to the underlying operating system (OS), the use of mex functions depends heavily on the operating system.
Our goal is to build a toolbox that works across multiple OSs, at least OSX, linux, and Microsoft Windows. However, most of our testing was done under OSX, so one should expect some bugs under the other OSs. Sorry about that.
Currently the compilation of TensCalc solvers under Microsoft Windows 10 seems to be very slow. It is not clear what causes this.
Getting the solver to converge can be difficult for problems that are numerically ill conditioned, especially with the C code, which does not perform any numerical conditioning when computing the Newton direction.
TensCalc gives very obscure error messages that make it pretty hard for users to figure out what is wrong with their optimizations.
E.g., if the cost function does not depend on one of the optimization variables, the error message complains that "sparse gradients are not supported." Why? Because if the cost function does not depend on one of the variables, then the gradient of the cost function is indeed a vector with some entries that are always zero. In general, TensCalc relies heavily on sparse matrices/vectors for fast computations, but it is not prepared to handle sparse gradients, since this should never happen.
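For illustration, a minimal sketch of a setup that triggers this situation (the names badexample, x, and z are hypothetical; z never appears in the objective, so the gradient with respect to it is identically zero):
Tvariable x [3];
Tvariable z [2];      % z does not appear in the objective below
J=norm2(x);
cmex2optimizeCS('classname','badexample',...
'objective',J,...
'optimizationVariables',{x,z},...   % including z leads to the "sparse gradients" error
'outputExpressions',{J,x},...
'solverVerboseLevel',3);
% the fix is to remove z from optimizationVariables or to make the objective depend on it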
The following people greatly helped in testing and improving this toolbox: David Copp, Sharad Shankar, Calvin Wang, Ricard Scott Erwing, Justin Pearson, Raphael Chinchilla, Murat Erdal, Steven Quintero.
This work was partially funded by the National Science Foundation.
Joao Hespanha (hespanha@ece.ucsb.edu)
http://www.ece.ucsb.edu/~hespanha
University of California, Santa Barbara
This file is part of TensCalc.
Copyright (C) 2012-21 The Regents of the University of California (author: Dr. Joao Hespanha). All rights reserved.
See LICENSE.txt