Open LukGross opened 6 months ago
The straightforward TT-decomposition of a full tensor does not work properly for me.
Minimal example:
```python
import tntorch as tn
import torch
import numpy as np

X, Y, Z = np.meshgrid(range(128), range(128), range(128))
full = torch.Tensor(
    np.sqrt(np.sqrt(X) * (Y + Z) + Y * Z**2) * (X + np.sin(Y) * np.cos(Z))
)  # Some analytical 3D function
print(full.shape)

t = tn.Tensor(full, ranks_tt=3, requires_grad=True)  # You can also pass a list of ranks

def metrics():
    print(t)
    print(
        "Compression ratio: {}/{} = {:g}".format(
            full.numel(), t.numel(), full.numel() / t.numel()
        )
    )
    print("Relative error:", tn.relative_error(full, t))
    print("RMSE:", tn.rmse(full, t))
    print("R^2:", tn.r_squared(full, t))

metrics()
```
Output:
```
torch.Size([128, 128, 128])
3D TT tensor:

 128 128 128
  |   |   |
 (0) (1) (2)
 / \ / \ / \
1   3   3   1

Compression ratio: 2097152/2097152.0 = 1
Relative error: tensor(0.0005, grad_fn=<DivBackward0>)
RMSE: tensor(22.0728, grad_fn=<DivBackward0>)
R^2: tensor(1.0000, grad_fn=<RsubBackward1>)
```
The expected output would be the one given in the tutorial. In particular, the compression ratio should be much greater than 1, not exactly 1.
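For reference, `t.numel()` should count the entries of the TT cores, not of the full tensor. A minimal sketch of what that count should be for this shape and these ranks (`tt_numel` is a hypothetical helper written here for illustration, not part of the tntorch API):

```python
def tt_numel(shape, ranks):
    # Each TT core k has shape (r_{k-1}, n_k, r_k); `ranks` includes the
    # boundary ranks r_0 = r_d = 1, so len(ranks) == len(shape) + 1.
    return sum(r_left * n * r_right
               for r_left, n, r_right in zip(ranks[:-1], shape, ranks[1:]))

params = tt_numel([128, 128, 128], [1, 3, 3, 1])
full_size = 128 ** 3
print(params)              # 1*128*3 + 3*128*3 + 3*128*1 = 1920
print(full_size / params)  # ~1092: the compression ratio one would expect
```

So with `ranks_tt=3` the report should read something like `2097152/1920 = 1092.27`, which is why the printed ratio of 1 looks like a bug in `numel()` rather than in the decomposition itself.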
I experience this behavior with both Python 3.9.6 and 3.12.2 on an M1 MacBook under macOS Sonoma 14.4.1.
Same problem under Ubuntu 22, Python 3.9.19.