DirkToewe opened 5 years ago
@DirkToewe Just to gather requirements: do you plan to use these linear algebra features in Node.js or browser?
For me that would be the browser, mostly. Outside the browser there are many alternatives (TF4Py, PyTorch, ...), while inside the browser I see none. My focus is on having either QR or Cholesky running on the GPU, whichever is faster (likely Cholesky, but it could be QR, since QR offers more potential for parallelism).
My application would be the compression of 3D model data. It has nothing to do with neural nets (yet), but it is highly interesting nonetheless (at least to me).
@DirkToewe Thanks for putting this together and collecting all the issues and use cases in one place (and for the PRs!). We are going to discuss in our next team sync and try and respond to this more fully by the end of next week. I think at the moment we are leaning towards figuring out how to make this happen (given our current bandwidth).
@DirkToewe We had a chance to talk about this and the summary is that we would like to support more linalg functions but don't have the bandwidth to review all the functionality currently in tf.linalg.
However, we think we can do some work to list the ops we would want to review PRs for, based on broad applicability/use case. @dsmilkov will follow up with some concrete thoughts on op prioritization and the review process for getting a subset of tf.linalg into tfjs. Thanks for the consolidated links to the existing PRs.
It's good to hear that you're committed to extending tf.linalg, and I realize how much review work is involved. Over the past few weeks, I've been trying to think of ways to reduce that work. So far, I've come up with two ideas:
One idea would be to port LAPACK to the Web using WebAssembly. LAPACK is thoroughly tested, so the review work could be limited to the backpropagation implementation and the Wasm side. Sadly, I will be of no help there, due to my lack of knowledge of Fortran, Wasm, and JS build tools. Maybe there is some other community member who could do it?
The other idea is that I can offer to give a presentation or prepare a video about the algorithm(s) used in any particular implementation that you're interested in.
Let me know if there is anything else I can do to help.
Another route to the Python stack is the iodide/pyodide project, which uses wasm to port Python, numpy, and friends to the browser. JavaScript access is also available: https://github.com/iodide-project/pyodide/blob/master/docs/using_pyodide_from_javascript.md
One minor downside of Python→JS ports is that they usually come with ginormous package sizes. Pyodide is ~12 MB if I see it correctly (~15 times the size of tf.min.js). For comparison: EmLapack is ~0.2 MB.
Just to add to @DirkToewe's list, tf.linalg.det would be really useful in TFJS as well.
Added it to the list :)
If the matrices are small, you can already compute the determinant using the current implementation of the QR decomposition:
function det( /*tf.Tensor*/ a ) {
  const a_shape = a.shape,
        [m, n] = a_shape.slice(-2);
  if (m !== n)
    throw new Error('det(): Matrix/matrices must be square.');
  return tf.stack(
    a.reshape([-1, n, n]).unstack().map(mat => {
      const [q, r] = tf.linalg.qr(mat),
            // the diagonal of R, extracted via a strided slice over the flattened matrix
            diag_r = r.flatten().stridedSlice([0], [n * n], [n + 1]),
            det_r = diag_r.prod(),
            // q is a product of n Householder reflections, i.e. det(q) = (-1)^n
            det_q = n % 2 === 0 ? +1 : -1;
      return tf.mul(det_q, det_r);
    })
  ).reshape(a_shape.slice(0, -2));
}
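For example, with the snippet above in scope (the exact determinant of [[1,2],[3,4]] is -2):

det( tf.tensor2d([[1, 2], [3, 4]]) ).print(); // ≈ -2, up to float32 round-off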
@DirkToewe Thanks for that implementation. My matrices are indeed small. I refactored your code a little and ran some tests, which showed round-off errors. I tried different dtypes, but that didn't help. Is this unavoidable with tf.linalg.qr?
const tf = require(`@tensorflow/tfjs`)

const det = tnsr => {
  // Note: don't use tnsr.shape.reverse() here; reverse() mutates the shape
  // array in place and would also leave the batch dims in reversed order,
  // so the final reshape would be wrong for batched input.
  const dims = tnsr.shape.slice(0, -2)
  const [m, n] = tnsr.shape.slice(-2)
  if (m !== n) throw new Error(`det(): Received non-square matrix.`)
  const mats = tnsr.reshape([-1, n, n]).unstack()
  const dets = mats.map(mat => {
    const r = tf.linalg.qr(mat)[1]
    const diag_r = r.flatten().stridedSlice([0], [n * n], [n + 1])
    const det_r = diag_r.prod()
    // q is a product of n Householder reflections, i.e. det(q) = (-1)^n
    const det_q = n % 2 === 0 ? 1 : -1
    return tf.mul(det_q, det_r)
  })
  return tf.stack(dets).reshape(dims)
}
const t1 = tf.tensor([[1, 2], [3, 4]], undefined, `int32`)
const t2 = tf.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], undefined, `int32`)
console.log(det(t1).arraySync()) // -1.9999994039535522
console.log(det(t2).arraySync()) // -0.000006132030193839455
Also, tf.linalg.inv should probably be on the list as well.
@janosh Added it to the list as well.
TFJS, sadly, only supports float32, and float32 is scarily inaccurate. The machine epsilon is only ~1.1920928955078125e-7 (2⁻²³), i.e. the next larger float32 after 4 is ~4.000000476837158. Considering that, your results seem to be within the margin of error.
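A quick way to see that granularity in plain JavaScript is Math.fround, which rounds its argument to the nearest float32 (a minimal sketch; the constant is 2⁻²³):

const eps = 1.1920928955078125e-7      // 2^-23, the float32 machine epsilon
console.log(Math.fround(1 + eps))      // 1.0000001192092896 (smallest float32 above 1)
console.log(Math.fround(4 + eps))      // 4 (the step falls below the float32 spacing near 4)
console.log(Math.fround(4 + 4 * eps))  // 4.000000476837158 (the next float32 after 4)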
tf.linalg.lstsq would be great, as it would support triangulating rays or 2D points with known camera parameters into 3D, as is done in the TensorFlow Graphics ray.triangulate function.
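Until a native lstsq lands, a least-squares solve can be pieced together from the existing tf.linalg.qr, with the small triangular back-substitution done on the CPU. Below is a minimal sketch, assuming a tall, full-column-rank A of shape [m, n] and a right-hand side b of shape [m, 1]; the lstsq name is only illustrative, not a tfjs API:

const tf = require(`@tensorflow/tfjs`)

// Minimizes ||A·x - b||₂ via A = Q·R, then solves R·x = Qᵀ·b by back-substitution.
function lstsq( /*[m,n], m >= n*/ A, /*[m,1]*/ b ) {
  const n = A.shape[1]
  const [q, r] = tf.linalg.qr(A)          // reduced QR: q is [m,n], r is [n,n]
  const y = tf.matMul(q, b, true, false)  // y = Qᵀ·b, shape [n,1]
  const R = r.arraySync(), Y = y.arraySync()
  const x = new Array(n).fill(0)
  for (let i = n - 1; i >= 0; i--) {      // back-substitution, done on the CPU
    let s = Y[i][0]
    for (let j = i + 1; j < n; j++) s -= R[i][j] * x[j]
    x[i] = s / R[i][i]                    // assumes full column rank (R[i][i] ≠ 0)
  }
  return tf.tensor2d(x.map(v => [v]))     // the solution x, shape [n,1]
}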
Any ETA on this issue?
Any ETA on this issue?
I came here to ask this too...
Hi, @DirkToewe
Thank you for opening this issue. Since this issue has been open for a long time, the code/debug information it contains may no longer be relevant to the current state of the code base.
The TFJS team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TFJS version with the latest compatible hardware configuration, which could potentially resolve the issue. If you are still facing the issue, please create a new GitHub issue with your latest findings and all the debugging information that could help us investigate.
Please follow the release notes to stay up to date with the latest developments happening in the TensorFlow.js space.
Thank you for your support and cooperation.
Hi @gaikwadrahul8,
It has indeed been a while since I last checked up on this issue, so thank you for reminding me. Looking at the API, tf.linalg still seems to offer the same three operations as last time: bandPart, gramSchmidt, and qr. The latter two still seem to use the same high-level implementations, which come with the same significant overhead as before. So this is still very much an open feature request.
TFJS is a neural net library, not a linear algebra library, so there are likely many other issues and features that understandably take precedence over this feature request.
Other than that, I am not sure there is anything new I can report or contribute.
Would love to see this as well. There's really no good alternative to numpy in JS from what I can tell, and TensorFlow is by far the closest, despite not even being primarily a math library.
Backpropagatable linear algebra methods are a vital part of machine learning, including but not limited to neural nets. Here and there, requests for different LA methods pop up, and more and more duplicate PRs regarding LA are submitted. This issue is an attempt to concentrate and coordinate the efforts and, if possible, establish a roadmap. Another motivation for this issue is this unanswered question regarding the future of tf.linalg.
The following methods seem to be particularly useful:
- tf.linalg.qr is implemented but very slow. Backpropagation is not supported.
- tf.linalg.qrSolve is missing.
- cholesky and choleskySolve are missing (a plain-JS sketch of the factorization follows after this list).
- triangularSolve is missing.
It is my impression that the TFJS team is currently too busy to focus on linear algebra, which is perfectly understandable considering how quickly TFJS is developing and progressing. On top of that, the PRs (especially mine) may not yet satisfy the TFJS standards. However, without feedback, that is hard to fix.
Would it be possible to add tf.linalg as a future milestone to TFJS? If there are no intentions to add more tf.linalg methods to tfjs-core, would it be possible to initiate a new tfjs-linalg sub-project for this purpose?