JeanKossaifi / tensorly-notebooks

Tensor methods in Python with TensorLy

little bug in tensor_regression_layer_pytorch.ipynb #13

Closed: segalinc closed this issue 3 years ago

segalinc commented 3 years ago

Hi Jean,

just wanted to point out that in the example tensor_regression_layer_pytorch.ipynb, in the TRL layer's forward pass, the line regression_weights = tl.tucker_to_tensor(self.core, self.factors) should instead be regression_weights = tl.tucker_to_tensor((self.core, self.factors)), otherwise you get an error. This happened to me while working on my own code with the latest version.
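For anyone hitting the same thing, a minimal sketch of the change outside the notebook (shapes are illustrative, not the notebook's):

```python
import torch
import tensorly as tl

tl.set_backend('pytorch')

# Toy Tucker core and factors standing in for the TRL's learned parameters.
core = torch.randn(2, 3, 4)
factors = [torch.randn(5, 2), torch.randn(6, 3), torch.randn(7, 4)]

# Fixed call: recent TensorLy expects a single (core, factors) tuple.
regression_weights = tl.tucker_to_tensor((core, factors))  # shape (5, 6, 7)

# The old two-argument form, tl.tucker_to_tensor(core, factors),
# raises an error on recent TensorLy versions.
```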

Let me know if you get the same error when running the example.

JeanKossaifi commented 3 years ago

Hi Christina,

Thanks for reporting, you are completely right! This should now be fixed in 6af351c30928f07eab740a460272f1f2ac6dd5df.

As a side note, we now provide well-tested PyTorch Tensor Regression Layers in TensorLy-Torch.

These also support tensor hooks such as tensor dropout or rank regularization (lasso).
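For example, a minimal sketch (assuming the tltorch.TRL and tltorch.tensor_hooks.tensor_dropout interfaces; shapes are illustrative):

```python
import torch
import tltorch

# TRL replacing a flatten + fully-connected head: maps (batch, 512, 3, 3)
# activations to 10 output classes.
trl = tltorch.TRL(input_shape=(512, 3, 3), output_shape=(10,),
                  factorization='tucker', rank='same')

x = torch.randn(8, 512, 3, 3)
y = trl(x)  # shape: (8, 10)

# Tensor hook: attach tensor dropout directly to the factorized weight.
trl.weight = tltorch.tensor_hooks.tensor_dropout(trl.weight, p=0.5)
```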

Feel free to re-open if you still have the issue!

segalinc commented 3 years ago

Awesome, trying it now, thanks!

Would you recommend using rank='same', or something like (512, 3, 3, output_cls), for the rank with networks like VGG or resnet18?

JeanKossaifi commented 3 years ago

Based on the tuple you show for rank, I'm assuming this is for Tucker. I would say rank='same' is a good start. Alternatively, rank=kernel.shape should always work well. Ideally, you want to reduce the rank to benefit from low-rank regularization (e.g. rank=0.75).
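To make those options concrete, a quick sketch (same assumed tltorch.TRL interface as above; shapes and n_classes are illustrative):

```python
import tltorch

n_classes = 10

# 1. rank='same': rank matches the weight tensor's own shape, a safe default.
trl_same = tltorch.TRL((512, 3, 3), (n_classes,),
                       factorization='tucker', rank='same')

# 2. Explicit per-mode rank, here the full shape of the regression weight.
trl_full = tltorch.TRL((512, 3, 3), (n_classes,),
                       factorization='tucker', rank=(512, 3, 3, n_classes))

# 3. Fractional rank, e.g. 0.75, to reduce the rank and benefit from
#    low-rank regularization.
trl_low = tltorch.TRL((512, 3, 3), (n_classes,),
                      factorization='tucker', rank=0.75)
```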

Depending on the problem, with fine-tuning/retraining, you should be able to reach rank=0.5 without loss of performance (the low-rank regularization may even help generalize better with longer fine-tuning/retraining). Let me know of any feedback you may have! :)