Closed jarlebring closed 3 years ago
I see that you managed to avoid code repetition by putting crucial parts of `opt_gaussnewton` in separate functions. Nice work! (I deleted a redundant comment here.)
For question 5: Do we mean like minimizing |Ax-b|_1, or some type of l_1 regularization of the classical least squares problem?
Also: Do we want to try to optimize it for the case of monomials? The generation of the matrix can possibly be done more efficiently, but I did not think it was worth considering for the initial implementation.
> For question 5: Do we mean like minimizing |Ax-b|_1, or some type of l_1 regularization of the classical least squares problem?
I mean the situation we start with in our manuscript: max_D |p(x)-g(x)|. When p(x) has the linear structure, maybe it is possible to get closer to that, rather than first relaxing to the 2-norm.
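When p(x) is linear in its coefficients, minimizing the max norm over a discretization is a linear program (discrete Chebyshev approximation), so one can attack max_D |p(x)-g(x)| directly instead of relaxing to the 2-norm. A minimal sketch in Python/SciPy for illustration (the matrix `A` and the exp target are stand-ins for the discretized basis and g):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_fit(A, b):
    """Minimize max_i |(A x - b)_i| via the standard LP reformulation:
    min t  subject to  -t <= (A x - b)_i <= t."""
    m, n = A.shape
    # Variables: [x (n entries), t (1 entry)]; objective: minimize t.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    ones = np.ones((m, 1))
    A_ub = np.vstack([np.hstack([A, -ones]),    #  (A x - b) <= t
                      np.hstack([-A, -ones])])  # -(A x - b) <= t
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]

# Degree-2 polynomial fit to exp on [0, 1] in the max norm.
z = np.linspace(0.0, 1.0, 50)
A = np.vander(z, 3, increasing=True)  # columns 1, z, z^2
coeffs, err = minimax_fit(A, np.exp(z))
```

The reported `err` is the achieved max-norm error on the discretization, which for a linear-in-coefficients p is the quantity the manuscript starts from.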
> Also: Do we want to try to optimize it for the case of monomials? The generation of the matrix can possibly be done more efficiently, but I did not think it was worth considering for the initial implementation.
Only optimize what needs to be optimized. Is this computationally demanding?
> Also: Do we want to try to optimize it for the case of monomials? The generation of the matrix can possibly be done more efficiently, but I did not think it was worth considering for the initial implementation.
>
> Only optimize what needs to be optimized. Is this computationally demanding?
I don't think so. We currently evaluate the graph once at the discretized points in order to create the Vandermonde-type matrix. I guess creating a proper Vandermonde matrix may be quicker, but I think we can skip that for now.
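For reference, the "proper Vandermonde" shortcut mentioned above amounts to building each column from the previous one with a single elementwise multiply, instead of evaluating the graph (or each power) independently. A small illustrative sketch, not the package's code:

```python
import numpy as np

def vander_recurrence(z, k):
    """Vandermonde-type matrix with columns 1, z, z^2, ..., z^(k-1),
    built column by column: z^j = z^(j-1) * z, one multiply per column."""
    V = np.empty((z.size, k))
    V[:, 0] = 1.0
    for j in range(1, k):
        V[:, j] = V[:, j - 1] * z
    return V

z = np.linspace(-1.0, 1.0, 7)
V = vander_recurrence(z, 4)
```

This matches what a library routine such as `np.vander` produces; the saving over a generic graph evaluation is modest, consistent with the "not worth it for the initial implementation" assessment.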
> For question 5: Do we mean like minimizing |Ax-b|_1, or some type of l_1 regularization of the classical least squares problem?
>
> I mean the situation we start with in our manuscript: max_D |p(x)-g(x)|. When p(x) has the linear structure, maybe it is possible to get closer to that, rather than first relaxing to the 2-norm.
Ok. Not sure how to do that. I have to look into it. Perhaps something like this: https://epubs.siam.org/doi/abs/10.1137/0715015
It seems the l1-fitting is also known as least absolute deviations. Moreover, there is already a Julia package (LinRegOutliers) that implements it. We could also look into the JuMP package directly. Or is that too many dependencies?
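Before pulling in LinRegOutliers or JuMP, it may be worth noting that least absolute deviations can be approximated in a few lines with iteratively reweighted least squares. This is an illustrative sketch of the technique, not what either package does internally:

```python
import numpy as np

def lad_fit_irls(A, b, iters=50, eps=1e-8):
    """Approximate argmin_x ||A x - b||_1 by iteratively reweighted least
    squares: each step solves a weighted 2-norm problem with weights
    1 / max(|residual|, eps), which mimics the 1-norm objective."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from the 2-norm fit
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
        Aw = A * np.sqrt(w)[:, None]
        bw = b * np.sqrt(w)
        x = np.linalg.lstsq(Aw, bw, rcond=None)[0]
    return x

# A single gross outlier barely moves the 1-norm fit of a line.
z = np.linspace(0.0, 1.0, 20)
b = 2.0 * z + 1.0
b[10] += 100.0  # outlier
A = np.column_stack([np.ones_like(z), z])
x_l1 = lad_fit_irls(A, b)
```

The example shows the robustness property that motivates l1-fitting: the recovered intercept and slope stay essentially at (1, 2) despite the outlier, whereas the 2-norm fit would be pulled off.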
No l1-optimization for now. We can open a new issue if we want to implement it later.
A graph which can be expressed as

p(x) = c_1 b_1(x) + ... + c_k b_k(x),

where c_1, ..., c_k are coefficients, is particularly easy to optimize. The output coefficients in `gen_general_poly_recursion` (BBC-recursion) appear in that way. The other coefficients are not linear in that sense, but it can still be useful to treat these separately, since we can treat at least those coefficients perfectly. They are easy to optimize since, if we wish to fit in the points z_1, ..., z_m, we obtain a linear least squares problem. The setup and solution of this linear least squares problem is in the Dropbox code `generators/monomial.jl`,
in the function `get_coeffs_linear_fit(discr,f,n0=size(discr,1);structure=:none,errtype=:abserr)`. It's in a bad place, with a bad name and not good parameters; it needs clean-up. Perhaps something like `opt_linear_fit!(graph,discr,f,linear_crefs)` in `opt_linear_fit.jl`, analogous to `opt_gaussnewton!`?
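The linear least squares fit described above can be sketched as follows. The basis functions and sample points here are illustrative stand-ins for the graph's linear-coefficient structure (hypothetical names, not the actual `get_coeffs_linear_fit`):

```python
import numpy as np

def linear_coeff_fit(basis_funcs, z, f):
    """Fit f(z) by sum_j c_j * b_j(z) in the 2-norm: evaluate each basis
    function at the discretization points to form the system matrix,
    then solve the resulting linear least squares problem."""
    A = np.column_stack([b(z) for b in basis_funcs])
    c, *_ = np.linalg.lstsq(A, f(z), rcond=None)
    return c

# When the target lies in the span of the basis, the fit is exact
# (up to rounding), which is the "treat those coefficients perfectly" point.
z = np.linspace(0.0, 1.0, 30)
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: np.exp(t)]
c = linear_coeff_fit(basis, z, np.exp)
```

This is exactly why the linear coefficients can be handled in closed form inside an outer nonlinear optimization such as `opt_gaussnewton!`: for fixed nonlinear parameters, the inner problem is solved optimally in one least squares solve.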