tensorly / torch

TensorLy-Torch: Deep Tensor Learning with TensorLy and PyTorch
http://tensorly.org/torch/
BSD 3-Clause "New" or "Revised" License

Contiguous Tucker core and factors #8

Closed: colehawkins closed this issue 3 years ago

colehawkins commented 3 years ago

In this line of the Tucker tensor code, https://github.com/tensorly/torch/blob/caaed8a16b30c01b9da0d64fb1ae211a0e62b46d/tltorch/factorized_tensors/factorized_tensor.py#L190, I believe core.contiguous should be core.contiguous(); this is just a small typo.

Also, the factors are not necessarily contiguous, which raised an error for my use case even after correcting the typo above. I think nn.Parameter(f.contiguous()) is a natural replacement, and I don't see any downsides. In sum, replace the line above with

return cls(nn.Parameter(core.contiguous()), [nn.Parameter(f.contiguous()) for f in factors])
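For context, a minimal plain-PyTorch sketch of why the .contiguous() calls matter (nothing tltorch-specific here; a transposed or strided view simply reports is_contiguous() == False until it is copied):

import torch
from torch import nn

x = torch.randn(4, 6)
f = x.t()                          # transposed view, shares storage with x
print(f.is_contiguous())           # False

p = nn.Parameter(f)                # nn.Parameter keeps the non-contiguous layout
print(p.is_contiguous())           # False

p = nn.Parameter(f.contiguous())   # .contiguous() materializes a dense copy
print(p.is_contiguous())           # True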

I can submit this as a PR, but I haven't done a public, tested PR before, so I might need some help contributing after my initial attempt.

JeanKossaifi commented 3 years ago

Thanks, you're absolutely right! I don't think we need to call contiguous on the factors, but I guess we may as well :)

You can just make the change in your local clone, push it to your GitHub fork, and from there the GitHub UI lets you open a pull request (it should be Contribute -> Open a pull request). Feel free to ping me on the TensorLy Slack too!
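Roughly, from the command line (the fork URL and branch name below are just placeholders):

# clone your fork, not the main repo
git clone https://github.com/<your-username>/torch.git
cd torch
git checkout -b fix-contiguous-tucker    # any branch name works

# ... edit tltorch/factorized_tensors/factorized_tensor.py ...

git commit -am "Make Tucker core and factors contiguous"
git push origin fix-contiguous-tucker
# then open the pull request from the GitHub UI as described above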

colehawkins commented 3 years ago

Thanks! PR submitted.

I was getting non-contiguous factors (observed by printing .is_contiguous()) after reshaping a tensor and initializing using from_tensor.

The tensor I passed in was x.reshape(dims). If this is a potential upstream issue, I can dig a bit.
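For the record, the check was along these lines (the shapes and rank here are illustrative, not the ones from my model; the from_tensor call mirrors the line linked above):

import torch
import tltorch

x = torch.randn(256, 256)
dims = (16, 16, 16, 16)
reshaped = x.reshape(dims)

fact = tltorch.TuckerTensor.from_tensor(reshaped, rank=(4, 4, 4, 4))
print(fact.core.is_contiguous())
for f in fact.factors:
    print(f.is_contiguous())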

JeanKossaifi commented 3 years ago

Thanks @colehawkins - better safe than sorry, so it's great to call .contiguous on them too - though it is strange that the factors end up non-contiguous, given that they're just matrices. How do you obtain them?

colehawkins commented 3 years ago

Unfortunately I can't reproduce it. First I reshape the weight using the reshaping code from this TT-embedding repo, and then factorize. My only guess is that the factors could pick up a transpose somewhere, which would lead to the final assertion below failing.
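To illustrate the guess (this is generic PyTorch behavior, not something I traced inside TensorLy's decomposition): a factor that picks up a transpose is a view, and a transposed view is not contiguous:

import torch

A = torch.randn(8, 5)
U, S, Vh = torch.linalg.svd(A, full_matrices=False)

V = Vh.t()                   # transpose is a view, no copy
print(U.is_contiguous())     # True
print(V.is_contiguous())     # False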

This script runs fine though, so closing the issue.

import torch
from torchvision import models
import tensorly
from tensorly.decomposition import tucker
import numpy as np
from scipy.stats import entropy
from sympy.utilities.iterables import multiset_partitions
from sympy.ntheory import factorint
from itertools import cycle, islice

tensorly.set_backend('pytorch')

MODES = ['ascending', 'descending', 'mixed']
CRITERIONS = ['entropy', 'var']

def _to_list(p):
    """Flatten a factorint dict {prime: multiplicity} into a flat list of primes."""
    res = []
    for k, v in p.items():
        res += [k, ] * v
    return res

def _roundrobin(*iterables):
    "roundrobin('ABC', 'D', 'EF') --> A D E B F C"
    # Recipe credited to George Sakkis
    pending = len(iterables)
    nexts = cycle(iter(it).__next__ for it in iterables)
    while pending:
        try:
            for nxt in nexts:  # avoid shadowing the builtin next()
                yield nxt()
        except StopIteration:
            pending -= 1
            nexts = cycle(islice(nexts, pending))

def _get_all_factors(n, d=3, mode='ascending'):
    """Enumerate the ways to write n as a product of d factors, ordered per `mode`."""
    p = _factorint2(n)
    # Pad with 1s so there are at least d primes to partition into d groups.
    if len(p) < d:
        p = p + [1, ] * (d - len(p))

    if mode == 'ascending':
        def prepr(x):
            return tuple(sorted([np.prod(_) for _ in x]))
    elif mode == 'descending':
        def prepr(x):
            return tuple(sorted([np.prod(_) for _ in x], reverse=True))

    elif mode == 'mixed':
        def prepr(x):
            x = sorted(np.prod(_) for _ in x)
            N = len(x)
            xf, xl = x[:N//2], x[N//2:]
            return tuple(_roundrobin(xf, xl))

    else:
        raise ValueError('Wrong mode specified, only {} are available'.format(MODES))

    raw_factors = multiset_partitions(p, d)
    clean_factors = [prepr(f) for f in raw_factors]
    clean_factors = list(set(clean_factors))
    return clean_factors

def _factorint2(p):
    """Prime factorization of p as a flat list, e.g. 12 -> [2, 2, 3]."""
    return _to_list(factorint(p))

def auto_shape(n, d=3, criterion='entropy', mode='ascending'):
    """Pick the most balanced d-way factorization of n, scored by `criterion`."""
    factors = _get_all_factors(n, d=d, mode=mode)
    if criterion == 'entropy':
        weights = [entropy(f) for f in factors]
    elif criterion == 'var':
        weights = [-np.var(f) for f in factors]
    else:
        raise ValueError('Wrong criterion specified, only {} are available'.format(CRITERIONS))

    i = np.argmax(weights)
    return list(factors[i])

D = 3
RANK = 10
model = models.mobilenet_v3_small()

for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        # Reshape the conv weight into a balanced D-way tensor with the
        # same number of elements, then Tucker-factorize it.
        new_shape = auto_shape(np.prod(module.weight.shape), d=D)
        reshaped = module.weight.reshape(new_shape)

        core, factors = tucker(reshaped, rank=D * [RANK])

        # Every factor comes back contiguous here, hence closing the issue.
        for factor in factors:
            print(factor.shape)
            assert factor.is_contiguous()