Closed: abalkin closed this issue 8 years ago.
Good list. Note that wrapping these numpy calls is typically straightforward, but getting their gradients can be more tricky. First implementations whose grad methods raise NotImplementedError (via the new grad_not_implemented mechanism) would still be useful.
On Wed, Nov 7, 2012 at 10:44 PM, abalkin notifications@github.com wrote:
The following operations from numpy.linalg are not implemented:
Linear algebra basics:
- norm Vector or matrix norm
- lstsq Solve linear least-squares problem
- pinv Pseudo-inverse (Moore-Penrose) calculated using a singular value decomposition [implemented in ops, but not exposed in linalg]
- matrix_power Integer power of a square matrix
Eigenvalues and decompositions:
- eig Eigenvalues and vectors of a square matrix
- eigh Eigenvalues and eigenvectors of a Hermitian matrix
- eigvals Eigenvalues of a square matrix
- eigvalsh Eigenvalues of a Hermitian matrix
- qr QR decomposition of a matrix
- svd Singular value decomposition of a matrix
Tensor operations:
- tensorsolve Solve a linear tensor equation
- tensorinv Calculate an inverse of a tensor
— Reply to this email directly or view it on GitHub: https://github.com/Theano/Theano/issues/1057.
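For orientation, a few of the listed operations can be exercised directly in plain NumPy; a minimal sketch (independent of any Theano wrapping, with arbitrary example data):

```python
import numpy as np

# lstsq: minimize ||A x - b||_2 for an overdetermined system
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 4.0])
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# pinv: Moore-Penrose pseudo-inverse via SVD; for a full-column-rank A
# it yields the same least-squares solution
x2 = np.linalg.pinv(A) @ b

# matrix_power: integer power of a square matrix (here the Fibonacci Q-matrix)
M = np.array([[0.0, 1.0], [1.0, 1.0]])
M5 = np.linalg.matrix_power(M, 5)  # [[3, 5], [5, 8]]
```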
Does sympy have implementations we could re-use?
Note that (somewhat hilariously, to me anyway) the gradient of a Cholesky factor with respect to its input is implemented, care of yours truly, although the implementation is currently a nested Python loop and thus incredibly slow. The implementation links to the publication where I came across the algorithm; that publication may also contain gradient algorithms for other decompositions.
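For reference, the loop-based gradient mentioned above can also be expressed with triangular solves. Below is a numpy-only sketch of the standard identity Abar = 0.5 * L^{-T} (P + P^T) L^{-1}, where P = Phi(L^T Lbar) and Phi keeps the lower triangle with a halved diagonal. This is my own illustration of the known formula, not the code referenced in the comment, and it is checked against a finite difference:

```python
import numpy as np

def chol_rev(L, Lbar):
    """Reverse-mode sensitivity of L = cholesky(A): given the cotangent
    Lbar, return the (symmetric) cotangent Abar of the input matrix."""
    P = np.tril(L.T @ Lbar)            # Phi: lower triangle of L^T Lbar ...
    P[np.diag_indices_from(P)] *= 0.5  # ... with the diagonal halved
    S = P + P.T
    S = np.linalg.solve(L.T, S)        # left-multiply by L^{-T}
    S = np.linalg.solve(L.T, S.T).T    # right-multiply by L^{-1}
    return 0.5 * S

# usage on a random SPD matrix
rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # safely positive definite
L = np.linalg.cholesky(A)
Lbar = np.tril(rng.standard_normal((n, n)))  # arbitrary lower-tri cotangent
Abar = chol_rev(L, Lbar)

# finite-difference check along a symmetric perturbation
dS = rng.standard_normal((n, n))
dS = dS + dS.T
h = 1e-6
f = lambda M: float(np.sum(Lbar * np.linalg.cholesky(M)))
fd = (f(A + h * dS) - f(A - h * dS)) / (2 * h)
```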
I made some progress implementing gradient of eigh() in my eig-grad branch. I need to make it work correctly for complex eigenvectors before it is ready for a PR. Gradient of eigvalsh() is a straightforward special case.
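Since eigvalsh() is called out as the straightforward special case, here is a numpy-only sketch of its gradient for real symmetric input (my own illustration, not code from the eig-grad branch), checked against a finite difference:

```python
import numpy as np

def eigvalsh_rev(A, wbar):
    """Gradient of w = eigvalsh(A) for real symmetric A with distinct
    eigenvalues: dw_i = v_i^T dA v_i, hence Abar = V diag(wbar) V^T."""
    w, V = np.linalg.eigh(A)
    return (V * wbar) @ V.T  # scales eigenvector columns by wbar

# symmetric test matrix with well-separated eigenvalues
rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 4.0, 7.0])
R = rng.standard_normal((4, 4))
A = A + 0.05 * (R + R.T)
wbar = rng.standard_normal(4)
Abar = eigvalsh_rev(A, wbar)

# finite-difference check along a symmetric perturbation
dS = rng.standard_normal((4, 4))
dS = dS + dS.T
h = 1e-6
f = lambda M: float(np.sum(wbar * np.linalg.eigvalsh(M)))
fd = (f(A + h * dS) - f(A - h * dS)) / (2 * h)
```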
Good. Currently we do not support gradients of complex variables. @lamblin made a proposal that describes how we could support them, but we do not for now:
https://github.com/Theano/Theano/blob/master/doc/proposals/complex_gradient.txt
Also, we don't have time to work on complex grad in the short term, so I hope this is not something you need. If you need it, tell us, we can probably find time to guide/help you.
I only need support for eigensystems of real symmetric matrices, so your roadmap fits my needs well. The lack of complex support precludes implementing grad for eig(), so that will have to wait. I'll clean up the docstrings and issue a pull request from the eig-grad branch.
I updated the ticket to note that numpy's norm now supports keepdims. We should support it as well.
@nouiz I want to work on this problem. Where should I start?
Start by finding the op you want to add. Then grep the Theano sources and search online to check whether it was already added to Theano, or implemented by someone else outside of Theano.
As a first contribution, the last item would be a good start:
norm now supports the keepdims parameter: https://github.com/numpy/numpy/pull/5196/files
Mostly, this means adding the keepdims parameter to Theano's norm method.
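For concreteness, this is the numpy behavior the Theano wrapper would need to mirror (a minimal sketch with made-up data):

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)
# keepdims=True keeps the reduced axis as size 1: shape (2, 1) instead of (2,)
row_norms = np.linalg.norm(x, axis=1, keepdims=True)
# which lets the result broadcast straight back against the input
unit_rows = x / row_norms
```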
@nouiz , I would like to work on "tensorinv Calculate an inverse of a tensor". Is anyone else working on that?
It has been started at:
https://github.com/Theano/Theano/pull/1973/files
Do you want to finish it? It is probably close to done, but we don't have much time to review it this week or next week. If you feel comfortable with that part of Theano, you can try it.
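For whoever picks these up, the numpy semantics that tensorinv and tensorsolve would need to match can be sketched quickly (the shapes here are my own arbitrary example; the tensor just has to flatten to a square, invertible matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
# `a` acts as a linear map from shape (8, 3) to shape (4, 6):
# 4*6 == 8*3 == 24, so it flattens to an (almost surely invertible) 24x24 matrix
a = rng.standard_normal((4, 6, 8, 3))
b = rng.standard_normal((4, 6))

# tensorsolve finds x such that tensordot(a, x, axes=2) == b
x = np.linalg.tensorsolve(a, b)
# tensorinv with ind=2 inverts that map; result has shape (8, 3, 4, 6)
ainv = np.linalg.tensorinv(a, ind=2)
```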
Sure. I'll fork that work, see what is yet to be done, and comment there if I have any questions. I'll try to finish this by the weekend and will remind you two at the end of next week for review. Thanks.
I will take tensorsolve.
@Sentient07 , are you working on tensorinv or not?
@iikulikov , I'm not currently working on that. I've started my GSoC project and wasn't able to finish this before. You can go ahead with it if you want.
@Sentient07 , okay, thanks for the info; I will work on tensorinv then.
tensorinv is now merged.
Done