Closed: nipunbatra closed this issue 7 years ago.
Thanks for the question.
`new_kt_to_tensor` doesn't work because, as the hint suggests, indexed assignment into arrays isn't supported in autograd. (`kt_to_tensor` probably also won't work because it uses in-place updating via `+=`, though it wasn't getting to that error.) See the tutorial for more information, particularly the "don't use" list.
I think the original code is using `np.ix_` to implement broadcasting, and it's probably better to use more standard broadcasting constructs here, as the StackOverflow answer suggests, or to use `np.einsum`.
I'm going to close this issue because it seems that your problem is solved. A PR to implement `np.ix_` support is welcome, though!
Hi, I'm trying to decompose a tensor of shape (m, n, o) into matrices A (m, r), B (n, r), and C (o, r). This is known as PARAFAC decomposition. Tensorly already does this kind of decomposition; I'm trying to use autograd for it.
The process is simple:
For defining the cost function, we need to reconstruct the (m, n, o) tensor from A, B, C. This is known as the Khatri-Rao product. Tensorly defines this as:
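The snippet itself is missing above; a sketch of such a reconstruction, built with `np.ix_` and in-place `+=` accumulation as described in the reply (an illustration, not the exact Tensorly source), might look like:

```python
import numpy as np
from functools import reduce

def kt_to_tensor(A, B, C):
    # Rebuild T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r] one rank-one
    # term at a time; np.ix_ reshapes the three column vectors into
    # broadcastable (m, 1, 1), (1, n, 1), (1, 1, o) pieces.
    factors = [A, B, C]
    T = np.zeros([f.shape[0] for f in factors])
    for r in range(A.shape[1]):
        vecs = np.ix_(*[f[:, r] for f in factors])
        T += reduce(np.multiply, vecs)  # in-place update: not autograd-safe
    return T
```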
I can define my cost as:
However, computing the gradient (multigradient) over this would give the following error:
I then tried to make the function definition simpler, as follows:
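That simpler definition is also missing here; given the reply's diagnosis, it presumably filled the tensor entry by entry with indexed assignment, roughly like this sketch (names and details reconstructed, not the original code):

```python
import numpy as np

def new_kt_to_tensor(A, B, C):
    # Fill each entry directly: T[i, j, k] = sum_r A[i, r]*B[j, r]*C[k, r].
    # The indexed assignment T[i, j, k] = ... is exactly what autograd
    # cannot differentiate through.
    m, n, o = A.shape[0], B.shape[0], C.shape[0]
    T = np.zeros((m, n, o))
    for i in range(m):
        for j in range(n):
            for k in range(o):
                T[i, j, k] = np.sum(A[i, :] * B[j, :] * C[k, :])
    return T
```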
However, the gradient could not be computed over this either. I modified the cost to use `new_kt_to_tensor` instead of `kt_to_tensor`. Error:

Of course, I had checked that both function definitions return exactly the same result.
I was wondering if you could let me know the best way to proceed with this use case.
I have also asked this StackOverflow question to try and get an `np.tensordot`-based solution for this multiplication.
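One broadcasting-based route in the spirit of that question (a sketch, not the accepted answer there): form the Khatri-Rao product of B and C via broadcasting, multiply by A, and reshape back to (m, n, o).

```python
import numpy as np

def kruskal_reconstruct(A, B, C):
    # Khatri-Rao product of B and C: column-wise outer products, shape (n*o, r).
    KR = (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])
    # A (m, r) times KR.T (r, n*o), reshaped back to (m, n, o).
    return A.dot(KR.T).reshape(A.shape[0], B.shape[0], C.shape[0])
```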