I'm using version 0.4.0 of tensorly-torch and am running into an issue where passing a non-contiguous input to a FactorizedEmbedding layer raises a runtime error. Here's a minimal working example:
import torch
from tltorch.factorized_layers import FactorizedEmbedding
embedding = FactorizedEmbedding(32, 16)  # 32 embeddings of dimension 16
data = torch.randint(0, 32, (2, 3))
embedding(data.T)  # data.T is a non-contiguous view, which triggers the error
Running this yields:
Traceback (most recent call last):
  File "tltorch_bug_mwe.py", line 6, in <module>
    embedding(data.T)
  File "/Users/jemis/opt/miniconda3/envs/tensorized3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/jemis/opt/miniconda3/envs/tensorized3.8/lib/python3.8/site-packages/tltorch/factorized_layers/factorized_embedding.py", line 100, in forward
    flatenned_input = input.view(-1)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
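For context, data.T is a strided view that shares storage with data, so it isn't contiguous, and view(-1) can't flatten it without a copy. A quick check in plain PyTorch (independent of tltorch) shows the difference:

import torch

data = torch.randint(0, 32, (2, 3))
print(data.T.is_contiguous())  # False: .T only swaps strides, it does not copy
print(data.T.reshape(-1))      # works: reshape falls back to a copy when a view isn't possible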
Replacing input.view(-1) with input.reshape(-1) does indeed resolve this.
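As a sketch, the change would be just that one line in FactorizedEmbedding.forward (variable name taken from the traceback above; the rest of the method is untouched):

# tltorch/factorized_layers/factorized_embedding.py, in FactorizedEmbedding.forward
flatenned_input = input.reshape(-1)  # reshape copies when the input isn't contiguous; view raises instead

Until such a change is released, making the input contiguous on the caller's side, e.g. embedding(data.T.contiguous()), also avoids the error.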