ckkelvinchan / RealBasicVSR

Official repository of "Investigating Tradeoffs in Real-World Video Super-Resolution"
Apache License 2.0
900 stars 134 forks

Questions about the inference code (CPU or GPU) #73

Closed soohwanlim closed 1 year ago

soohwanlim commented 1 year ago

In `inference_realbasicvsr.py`:

    if torch.cuda.is_available():
        model = model.cuda()
        cuda_flag = True

    with torch.no_grad():
        if isinstance(args.max_seq_len, int):
            outputs = []
            for i in range(0, inputs.size(1), args.max_seq_len):
                imgs = inputs[:, i:i + args.max_seq_len, :, :, :]
                if cuda_flag:
                    imgs = imgs.cuda()
                outputs.append(model(imgs, test_mode=True)['output'].cpu())
            outputs = torch.cat(outputs, dim=1)
        else:
            if cuda_flag:
                inputs = inputs.cuda()
            outputs = model(inputs, test_mode=True)['output'].cpu()

At `outputs.append(model(imgs, test_mode=True)['output'].cpu())`:

Can you tell me what `.cpu()` does here? When I ran the script, it printed `cuda_flag = True, torch.cuda.is_available() = True`, so I assumed inference was running on the GPU. But since the output call ends in `.cpu()`, I'm not sure whether the computation is actually being done on the CPU instead.
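From what I can tell, `.cpu()` only copies the *result* tensor back to host memory after the forward pass has already run on whatever device the tensor was on. A minimal sketch of that device round-trip (using a toy computation in place of the model, just to illustrate):

```python
import torch

# Toy stand-in for the inference loop's device handling:
# move the input to the GPU when one is available, compute there,
# then copy the result back to host memory with .cpu() so the
# accumulated outputs do not occupy GPU memory.
x = torch.randn(1, 3, 4, 4)   # input tensor, starts on the CPU
if torch.cuda.is_available():
    x = x.cuda()              # computation below runs on the GPU
y = (x * 2).cpu()             # result is copied back to the CPU afterwards
print(y.device)               # always reports "cpu", even if compute ran on GPU
```

So seeing `.cpu()` on the output doesn't necessarily mean the model itself ran on the CPU, but I'd like to confirm that reading.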

Thank you.