joeyballentine / Video-Inference

Easy inference for video networks. Currently supports SOFVSR (traiNNer Version), RIFE, and TecoGAN-pytorch

fp16 mode? #4

Closed Sazoji closed 3 years ago

Sazoji commented 3 years ago

Could an argument be passed to utilize half tensors or an AMP mode to reduce VRAM usage?
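(For context, the two usual options are casting the whole model and its inputs to half precision with `.half()`, or wrapping the forward pass in `torch.cuda.amp.autocast()`, which keeps the weights in fp32 and downcasts selected ops. A minimal CPU-only sketch of the memory difference, not tied to this repo's code:)

```python
import torch

x = torch.randn(4, 3, 64, 64)   # float32 activations: 4 bytes per element
x_half = x.half()               # float16 ("half") copy: 2 bytes per element

bytes_fp32 = x.element_size() * x.nelement()
bytes_fp16 = x_half.element_size() * x_half.nelement()
# half precision uses exactly half the memory for the same tensor shape
```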

joeyballentine commented 3 years ago

Yes, that is definitely a possibility. It actually would be really easy to add.

joeyballentine commented 3 years ago

Would you mind testing the fp16 branch I just pushed? I'm not 100% sure the step I added is all that's necessary to get it working. All you need to do is use the --fp16 argument and it should theoretically work.

Sazoji commented 3 years ago

I'm getting:

```
Traceback (most recent call last):
  File "run.py", line 86, in <module>
    main()
  File "run.py", line 79, in main
    model.inference(LR_list, args)
  File "D:\BasicSR2\VideoInference2\utils\model_classes\SOFVSR_model.py", line 131, in inference
    _, _, _, fake_H = self.model(LR.to(self.device))
  File "C:\Users\Matt\anaconda3\envs\basicSR\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\BasicSR2\VideoInference2\utils\architectures\SOFVSR_arch.py", line 61, in forward
    optical_flow_L1, optical_flow_L2, optical_flow_L3 = self.OFR(torch.cat(input, 0))
  File "D:\BasicSR2\VideoInference2\utils\architectures\SOFVSR_arch.py", line 152, in __call__
    optical_flow_L1 = self.RNN2(self.RNN1(input_L1))
  File "C:\Users\Matt\anaconda3\envs\basicSR\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Matt\anaconda3\envs\basicSR\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
    input = module(input)
  File "C:\Users\Matt\anaconda3\envs\basicSR\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Matt\anaconda3\envs\basicSR\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\Matt\anaconda3\envs\basicSR\lib\site-packages\torch\nn\modules\conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
```

--fp16 also asks for an additional argument, like --input does.
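(For what it's worth, that argparse symptom usually means the flag was added without `action="store_true"`, so it expects a value. A minimal sketch with a hypothetical parser, not the repo's actual run.py:)

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--input")                      # takes a value, e.g. --input video.mp4
parser.add_argument("--fp16", action="store_true")  # boolean flag, no value needed

args = parser.parse_args(["--fp16"])
# args.fp16 is True when the flag is passed, False when it is omitted
```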

joeyballentine commented 3 years ago

Ok, I know what I need to do; this will be slightly more complicated to add than I thought. I should be able to finish it tomorrow.
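(The RuntimeError above is the classic half-precision mismatch: calling `model.half()` converts the weights to float16, but the inference code still feeds float32 inputs. The fix is presumably to cast each input to the model's dtype before the forward pass; a minimal sketch, hypothetical and not the repo's actual code:)

```python
import torch

model = torch.nn.Conv2d(3, 8, kernel_size=3).half()  # weights are now float16
x = torch.randn(1, 3, 16, 16)                        # float32 input: on CUDA this
                                                     # triggers the RuntimeError above

# cast the input to match the model's parameter dtype before calling forward
x = x.to(next(model.parameters()).dtype)
```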

joeyballentine commented 3 years ago

I just pushed an update to the fp16 branch if you wouldn't mind testing it for me @mjc619

Sazoji commented 3 years ago

Nice and fast! VRAM usage is only ~20% lower, but the speed is many times faster. Thank you!

joeyballentine commented 3 years ago

Awesome, glad to hear. It ended up being simpler than I thought; I've just been busy, so I didn't get around to it until just now. Sorry I didn't get to it sooner.

mirh commented 2 years ago

Still getting the problem, at least with TGAN.

```
  File "<root>\run.py", line 86, in <module>
    main()
  File "<root>\run.py", line 79, in main
    model.inference(LR_list, args)
  File "<root>\utils\model_classes\TecoGAN_model.py", line 71, in inference
    hr_curr = self.model.forward(lr_curr, self.lr_prev, self.hr_prev)
  File "<root>\utils\architectures\TecoGAN_arch.py", line 285, in forward
    lr_flow = self.fnet(lr_curr, lr_prev)
  File "torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "<root>\utils\architectures\TecoGAN_arch.py", line 184, in forward
    out = self.encoder1(torch.cat([x1, x2], dim=1))
  File "torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\container.py", line 141, in forward
    input = module(input)
  File "torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\conv.py", line 447, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "torch\nn\modules\conv.py", line 443, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
```
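(This traceback suggests the TecoGAN path got the `model.half()` call but not the matching input cast; unlike SOFVSR, its forward takes three tensors (`lr_curr`, `lr_prev`, `hr_prev`), and all of them would need converting. A hypothetical helper sketch, with names that are illustrative rather than from the repo:)

```python
import torch

def cast_inputs_to_model_dtype(model, *tensors):
    """Cast every input tensor to the dtype of the model's parameters."""
    dtype = next(model.parameters()).dtype
    return tuple(t.to(dtype) for t in tensors)

# stand-in for a half-precision TecoGAN model
model = torch.nn.Conv2d(3, 8, kernel_size=3).half()
lr_curr, lr_prev, hr_prev = (torch.randn(1, 3, 16, 16) for _ in range(3))
lr_curr, lr_prev, hr_prev = cast_inputs_to_model_dtype(model, lr_curr, lr_prev, hr_prev)
```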