Open ghost opened 4 years ago
In your example, index J is contracted, so the result tensor has only two remaining indices: I and K. Removing J from OIndex should fix the issue.
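For reference, here is a quick NumPy sketch of the same point: when the repeated index J is contracted, it disappears from the output and only I and K remain (the shapes are made up for illustration).

```python
import numpy as np

t1 = np.random.rand(3, 2)   # shape (I, J)
t2 = np.random.rand(2, 2)   # shape (J, K)

# J is the contracted index, so the result carries only I and K.
res = np.einsum('IJ,JK->IK', t1, t2)
print(res.shape)  # (3, 2)
```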
Explicit Einstein summation, as NumPy has it, is not currently supported in Fastor. There is already a discussion on how to support this (see #91). One way around the issue for now is to avoid contracting the index that you want to retain, that is:
auto res = Fastor::einsum<Fastor::Index<I,M>,Fastor::Index<N,K>, Fastor::OIndex<I,M,N,K>>(t1, t2);
Now your problem is narrowed down to summing over the M and N dimensions using the sum or diag function. The sum function in Fastor does not support summing along an arbitrary axis at the moment, but this is planned. So perhaps for now you can do that step manually.
Hi, thanks for the response. I am not sure what the summation process over M and N would look like in order to produce 'res'.
It looks like the following code:
auto res = Fastor::einsum<Fastor::Index<I,M>,Fastor::Index<N,K>, Fastor::OIndex<I,M,N,K>>(t1, t2);
will produce additional entries that are of no interest, and therefore, in order to get to 'res', I would need to filter out these indices.
In python this could be done with two einsum calls:
res = np.einsum('IM,NK->IMNK', t1, t2)
res2 = np.einsum('IMMK->IMK', res)
Hence, in this case, instead of summing over the contracted indices, you would place the output of the product 'ij,jk->ijk' in a tensor like so:
Fastor::Tensor<double, 3, 2, 2> res;
for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 2; ++j)
        for (int k = 0; k < 2; ++k)
            res(i, j, k) = t1(i, j) * t2(j, k);
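As a sanity check, the same triple loop can be written as a broadcast product in NumPy (shapes assumed as above), and it matches the explicit einsum result:

```python
import numpy as np

t1 = np.random.rand(3, 2)   # (i, j)
t2 = np.random.rand(2, 2)   # (j, k)

# res[i, j, k] = t1[i, j] * t2[j, k], via broadcasting:
# (3, 2, 1) * (1, 2, 2) -> (3, 2, 2)
res = t1[:, :, None] * t2[None, :, :]

print(np.allclose(res, np.einsum('ij,jk->ijk', t1, t2)))  # True
```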
Is there a better way to achieve this?
Thanks.
Hi,
I am currently using Fastor to perform tensor contractions, and I am unsure whether it supports my example usage.
For example, given this Python code:
The result would produce a tensor of shape [3,2,2], which looks like:
Trying to replicate this behaviour in Fastor, I wrote the following code:
And this gives an error:
Is this supported?
Thanks.