Phoenix1327 / tea-action-recognition

The PyTorch code of the TEA module (Temporal Excitation and Aggregation for Action Recognition)

Error when running the code with PyTorch 1.5.0 #5

Open littlefisherfisher opened 3 years ago

littlefisherfisher commented 3 years ago

Error: "Legacy autograd function with non-static forward method is deprecated. " How to fix this, thanks.

meifish commented 3 years ago

The quick way is to create a new virtual environment with PyTorch 1.2 and run the code under that environment. If you really want to fix the legacy autograd class, this post gives a hint on how to convert it to the new style: https://discuss.pytorch.org/t/custom-autograd-function-must-it-be-static/14980/2
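In short, the legacy style defines forward/backward as instance methods and calls the Function by instantiating it, while the new style requires both to be @staticmethod, with state stored on ctx, and the class invoked through .apply(). A minimal sketch of the new style (the Scale function here is only an illustration, not code from this repo):

import torch

class Scale(torch.autograd.Function):
    # New-style autograd Function: forward/backward are static, state lives on ctx.
    @staticmethod
    def forward(ctx, x, factor):
        ctx.factor = factor
        return x * factor

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x; None for the non-tensor argument `factor`.
        return grad_output * ctx.factor, None

y = Scale.apply(torch.randn(3, requires_grad=True), 2.0)
y.sum().backward()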

Alvintan0712 commented 3 years ago

The TSN model doesn't use torch.autograd.Function directly, so I can't just add @staticmethod to fix it. I think the problem comes from autograd.Variable. Has anyone solved this already? My training loop is below (see the note after it).

def train(train_loader, model, criterion, optimizer, epoch, log):
    batch_time = AverageMeter()
    data_time = AverageMeter()
    losses = AverageMeter()
    top1 = AverageMeter()
    top5 = AverageMeter()

    if args.no_partialbn:
        model.module.partialBN(False)
    else:
        model.module.partialBN(True)

    # switch to train mode
    model.train()

    end = time.time()
    for i, (input, target) in enumerate(train_loader):
        # measure data loading time
        data_time.update(time.time() - end)

        target = target.cuda()
        input_var = torch.autograd.Variable(input)
        target_var = torch.autograd.Variable(target)

        print(input_var)

        output = model(input_var)
        loss = criterion(output, target_var)
        prec1, prec5 = accuracy(output.data, target, topk=(1, 5))

        losses.update(loss.item(), input.size(0))
        top1.update(prec1.item(), input.size(0))
        top5.update(prec5.item(), input.size(0))

        # torch.autograd.set_detect_anomaly(True)
        loss.backward()

        if args.clip_gradient is not None:
            total_norm = clip_grad_norm_(model.parameters(), args.clip_gradient)

        optimizer.step()
        optimizer.zero_grad()

        batch_time.update(time.time() - end)
        end = time.time()

        if i % args.print_freq == 0:
            output = ('Epoch: [{0}][{1}/{2}], lr: {lr:.5f}\t'
                      'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                      'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
                      'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
                      'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t'
                      'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(
                epoch, i, len(train_loader), batch_time=batch_time,
                data_time=data_time, loss=losses, top1=top1, top5=top5,
                lr=optimizer.param_groups[-1]['lr'] * 0.1))
            print(output)
            log.write(output + '\n')
            log.flush()
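
For what it's worth, the deprecation error in PyTorch 1.5 is most likely raised by the legacy autograd.Function in basic_ops.py (SegmentConsensus), not by autograd.Variable. Variable has been a no-op wrapper since PyTorch 0.4, so in the loop above the wrappers can simply be dropped, roughly:

        # Tensors track gradients themselves; no Variable wrapper is needed.
        target = target.cuda()
        output = model(input)
        loss = criterion(output, target)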
meifish commented 3 years ago

A while ago I attempted to fix that with the autograd static method by changing the SegmentConsensus class to inherit from torch.autograd.Function. I basically only changed basic_ops.py, so that models.py uses the static forward/backward of SegmentConsensus through the new-style autograd.

Please note that I didn't dive deep into the TEA model architecture details of this project; my attempt was mainly to convert a legacy PyTorch autograd function to the new-style autograd, which runs on PyTorch 1.3 onward. Please verify for yourself that it meets your needs.

https://github.com/meifish/tea-action-recognition-patch
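
For reference, a new-style SegmentConsensus would look roughly like the sketch below; the 'avg'/'identity' logic follows the original basic_ops.py, so please check the patch repo above for the exact version.

import torch

class SegmentConsensus(torch.autograd.Function):
    # New-style autograd Function: static forward/backward, state kept on ctx.
    @staticmethod
    def forward(ctx, input_tensor, consensus_type, dim):
        ctx.consensus_type = consensus_type
        ctx.dim = dim
        ctx.shape = input_tensor.size()
        if consensus_type == 'avg':
            return input_tensor.mean(dim=dim, keepdim=True)
        elif consensus_type == 'identity':
            return input_tensor
        return None

    @staticmethod
    def backward(ctx, grad_output):
        if ctx.consensus_type == 'avg':
            grad_in = grad_output.expand(ctx.shape) / float(ctx.shape[ctx.dim])
        elif ctx.consensus_type == 'identity':
            grad_in = grad_output
        else:
            grad_in = None
        # No gradients for the non-tensor arguments consensus_type and dim.
        return grad_in, None, None

# The consensus module then calls SegmentConsensus.apply(x, consensus_type, dim)
# instead of instantiating the Function as in the legacy code.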

Alvintan0712 commented 3 years ago

Thanks for the reply; the problem is solved.