faroit closed this issue 7 years ago
I would say that simultaneously summing over an index in three places is not a proper contraction in the sense of tensor operations. From a computational point of view it is also quite different: the requested operation cannot be expressed as a matrix multiplication (i.e. BLAS) and genuinely requires explicit `for` loops.
I have been thinking about writing a different macro that would accept such expressions and turn them into `for` loops, but that requires a completely different approach and is not a simple modification. While such a macro could handle your use case, for ordinary contractions (i.e. where the contracted index appears in only two places) it would typically be less efficient than the current `@tensor` macro.
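For concreteness, the operation under discussion — one index appearing in three factors — can be written as plain nested loops. This is only a sketch of what such a loop-generating macro might emit; the function name `threeway` and the dimensions are hypothetical, and `A`, `B`, `C` are arbitrary matrices sharing their second dimension:

```julia
# V[a,b,c] = Σ_k A[a,k] * B[b,k] * C[c,k]
# The index k appears in three factors, so this cannot be reduced to a
# pairwise (BLAS-expressible) contraction and needs explicit loops.
function threeway(A, B, C)
    I, J, K = size(A, 1), size(B, 1), size(C, 1)
    R = size(A, 2)  # shared (summed) dimension
    V = zeros(I, J, K)
    for a in 1:I, b in 1:J, c in 1:K, k in 1:R
        V[a, b, c] += A[a, k] * B[b, k] * C[c, k]
    end
    return V
end

A, B, C = rand(4, 3), rand(5, 3), rand(6, 3)
V = threeway(A, B, C)  # a 4×5×6 array
```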
In order to use this package for your purpose for now, you could define an "identity" tensor

```julia
I = zeros(100, 100, 100)
for n = 1:100
    I[n, n, n] = 1
end
```
and then you could write

```julia
@tensor V[a,b,c] := A[a,k] * B[b,l] * C[c,m] * I[k,l,m]
```

though I guess the explicit loops will be more efficient.
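To convince yourself that the identity-tensor workaround computes the same thing as the direct three-way sum, here is a small numerical check with plain loops standing in for `@tensor` (the tiny dimensions and the name `Id` are arbitrary choices for this sketch):

```julia
n = 5
A, B, C = rand(4, n), rand(4, n), rand(4, n)

# Diagonal "identity" tensor: Id[k,l,m] = 1 iff k == l == m
Id = zeros(n, n, n)
for k in 1:n
    Id[k, k, k] = 1
end

# Full contraction: V1[a,b,c] = Σ_{k,l,m} A[a,k] * B[b,l] * C[c,m] * Id[k,l,m]
V1 = zeros(4, 4, 4)
for a in 1:4, b in 1:4, c in 1:4, k in 1:n, l in 1:n, m in 1:n
    V1[a, b, c] += A[a, k] * B[b, l] * C[c, m] * Id[k, l, m]
end

# Direct three-way sum: V2[a,b,c] = Σ_k A[a,k] * B[b,k] * C[c,k]
V2 = zeros(4, 4, 4)
for a in 1:4, b in 1:4, c in 1:4, k in 1:n
    V2[a, b, c] += A[a, k] * B[b, k] * C[c, k]
end

@assert V1 ≈ V2  # both formulations agree
```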
@faroit @Jutho - See some (very) preliminary code for solving this problem here: https://github.com/ahwillia/Einsum.jl
Let me know what you think.
Dear @ahwillia. That looks nice indeed. I'll study the code in detail somewhat later, but I certainly would not mind having this in TensorOperations.jl, unless you prefer to publish it as a separate package.
@ahwillia I find this really useful — great work! Currently I am seeing some speed issues compared to numpy's einsum, but let's move the discussion to your package for now.
Thanks for your great effort. I just tried the staged branch (#8) on Julia 0.4:
results in
whereas in python/numpy
works.
I guess I did not fully understand the way TensorOperations works. Since the example above is the very popular PARAFAC/CANDECOMP product, it would make sense to include it in the README.
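As a side note, the PARAFAC/CANDECOMP product can also be assembled as a sum of rank-1 outer products of the factor columns, which replaces the innermost loops with broadcasting. This is only an illustrative sketch, not anything from TensorOperations.jl itself; the name `cp_reconstruct` is made up:

```julia
# CP reconstruction: V = Σ_r A[:,r] ∘ B[:,r] ∘ C[:,r]
function cp_reconstruct(A, B, C)
    I, J, K = size(A, 1), size(B, 1), size(C, 1)
    V = zeros(I, J, K)
    for r in 1:size(A, 2)
        # Rank-1 outer product of the r-th columns, via broadcasting:
        # (I-vector) .* (1×J) .* (1×1×K) → I×J×K
        V .+= A[:, r] .* reshape(B[:, r], 1, J) .* reshape(C[:, r], 1, 1, K)
    end
    return V
end
```

In einsum notation this is the `'ak,bk,ck->abc'` product that numpy's einsum handles directly.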