Open AtomicCactus opened 1 year ago
I think we just need to make sure that, on line #42 in svd.py, the tl.ones(tl.shape(V)[0] - tl.shape(U)[1]) tensor is moved to the same device as the signs tensor it is concatenated with.
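A minimal sketch of the proposed fix, written in pure PyTorch with a hypothetical helper name (the actual tensorly code goes through the backend's tl.ones/tl.concatenate, so this is an illustration of the device fix, not the library's exact code):

```python
import torch

def concat_signs(U, V):
    # Hypothetical stand-in for the padding step in tensorly's SVD helper:
    # the signs vector must be padded with ones living on the SAME device
    # (and dtype) as `signs`; otherwise torch.cat mixes CPU and GPU tensors.
    signs = torch.sign(torch.sum(U, dim=0))
    pad = V.shape[0] - U.shape[1]
    # Before the fix: torch.ones(pad) defaults to the CPU.
    # The fix: create the padding on signs.device with signs.dtype.
    ones = torch.ones(pad, device=signs.device, dtype=signs.dtype)
    return torch.cat([signs, ones])
```

On a CUDA input, `signs.device` is the GPU, so the concatenation no longer fails with a device-mismatch error.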
Good catch, thanks @AtomicCactus! Could you open a small PR to fix the issue? Perhaps in the future we could test this kind of issue, at least by varying the dtype of the input tensor.
Sure thing! https://github.com/tensorly/tensorly/pull/504. I didn't add a unit test for this, since I'm not sure whether the CI/CD pipeline, or whatever system runs the tests, has a GPU.
Thanks @AtomicCactus! I reviewed the PR -- for the test, we could check the dtype rather than the device, since our current CI pipeline doesn't have GPU support.
Describe the bug
Decomposing a 2D tensor along both modes while specifying two ranks results in an error: internally, a tensor is created on the CPU as part of the process, and it cannot be concatenated with the GPU tensors.
Works fine when the rank is specified as an integer, but not as a list:
rank=16 works
rank=[16,16] crashes
Works fine on the CPU, but performance is not the same.
Steps or Code to Reproduce
Expected behavior
Tucker decomposition should not fail when ranks are provided as an array of values.
Actual result
Versions