mx-mark / VideoTransformer-pytorch

PyTorch implementation of a collection of scalable Video Transformer benchmarks.

build_finetune_optimizer raise NotImplementedError #14

Closed aries-young closed 2 years ago

aries-young commented 2 years ago

Why does build_finetune_optimizer raise NotImplementedError if hparams.arch is not mvit? I used the training command in the README to finetune ViViT.

def build_finetune_optimizer(hparams, model):
    if hparams.arch == 'mvit':
        if hparams.layer_decay == 1:
            # No layer-wise decay: every parameter uses the same learning rate
            get_layer_func = None
            scales = None
        else:
            num_layers = 16
            get_layer_func = partial(get_mvit_layer, num_layers=num_layers + 2)
            # Per-layer learning-rate scales: the deepest layer gets scale 1,
            # shallower layers are scaled down geometrically by layer_decay
            scales = [hparams.layer_decay ** i for i in reversed(range(num_layers + 2))]
    else:
        # Any architecture other than 'mvit' (e.g. ViViT) falls through here
        raise NotImplementedError
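For reference, here is a minimal, self-contained sketch of what the mvit branch computes for the per-layer learning-rate scales. The concrete values (`layer_decay = 0.75`, `num_layers = 16`) are assumptions for illustration only, not values taken from the repo's configs:

```python
# Hypothetical hyperparameters (assumptions, not from the repo's configs)
layer_decay = 0.75
num_layers = 16

# One scale per transformer layer, plus 2 extra slots (e.g. patch
# embedding and head). The final entry is layer_decay**0 == 1.0, so
# the deepest layer trains at the full learning rate while earlier
# layers are scaled down geometrically.
scales = [layer_decay ** i for i in reversed(range(num_layers + 2))]

print(len(scales))   # 18 entries
print(scales[-1])    # 1.0 for the deepest slot
```

These scales would then be applied per parameter group when the optimizer is built; since the `else` branch above raises instead of building such groups, finetuning any non-mvit arch (like ViViT) fails before the optimizer is created.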
mx-mark commented 2 years ago

@aries-young Thanks for reporting this problem; we will fix it this week.