dcastleproject opened this issue 5 months ago
me too
me too
I managed to solve it: if you are using the VHS node to load the video, force the video size to its original value. That worked for me and I no longer get the error.
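For context on why that can help: the VHS loader's force_size option resizes every decoded frame to fixed dimensions before they reach the interpolation node, which both normalizes the frame size and caps memory use. A rough, purely illustrative Python equivalent (force_size here is my own name for the idea, not the VHS node's actual internals; ComfyUI images are (N, H, W, C) float tensors in [0, 1]):

import torch
import torch.nn.functional as F

def force_size(frames: torch.Tensor, width: int, height: int) -> torch.Tensor:
    """Resize a batch of frames (N, H, W, C) to a fixed size.

    Conceptual stand-in for the VHS loader's force_size option.
    """
    # torch's resize expects (N, C, H, W), so permute, resize, permute back.
    x = frames.permute(0, 3, 1, 2)
    x = F.interpolate(x, size=(height, width), mode="bilinear", align_corners=False)
    return x.permute(0, 2, 3, 1)

Pyramid-based models like FILM generally want dimensions divisible by a power of two, so rounding the target size to something like a multiple of 64 avoids padding surprises.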
I tried forcing the video to its original size, but I'm still getting the error with Film VFI (see below). I also got a bunch of error messages during installation. Rife VFI worked for me, though.
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/feature_extractor.py", line 16, in forward
    capped_sub_levels = ops.prim.min(_0, sub_levels)
    extract_sublevels = self.extract_sublevels
    _1 = (extract_sublevels).forward(image_pyramid[i], capped_sub_levels, )
    _2 = torch.append(sub_pyramids, _1)
    feature_pyramid = annotate(List[Tensor], [])
  File "code/__torch__/feature_extractor.py", line 50, in forward
    _2 = getattr(convs, "2")
    _3 = getattr(convs, "3")
    head = (_0).forward(image, )
           ~~~~~~~~~~~ <--- HERE
    _8 = torch.append(_7, head)
    if torch.lt(0, torch.sub(n, 1)):
  File "code/__torch__/torch/nn/modules/container/___torch_mangle_2.py", line 12, in forward
    _0 = getattr(self, "0")
    _1 = getattr(self, "1")
    input0 = (_0).forward(input, )
             ~~~~~~~~~~~ <--- HERE
    return (_1).forward(input0, )
  def __len__(self: __torch__.torch.nn.modules.container.___torch_mangle_2.Sequential) -> int:
  File "code/__torch__/torch/nn/modules/container.py", line 24, in forward
    _1 = getattr(self, "1")
    input0 = (_0).forward(input, )
    return (_1).forward(input0, )
           ~~~~~~~~~~~ <--- HERE
  def __len__(self: __torch__.torch.nn.modules.container.Sequential) -> int:
    return 2
  File "code/__torch__/torch/nn/modules/activation.py", line 11, in forward
    input: Tensor) -> Tensor:
    _0 = __torch__.torch.nn.functional.leaky_relu
    _1 = _0(input, 0.20000000000000001, False, )
         ~~ <--- HERE
    return _1
  File "code/__torch__/torch/nn/functional.py", line 8, in leaky_relu
      result = result0
    else:
      result1 = torch.leaky_relu(input, negative_slope)
                ~~~~~~~~~~~~~~~~ <--- HERE
      result = result1
    return result

Traceback of TorchScript, original code (most recent call last):
  File "C:\Users\Danylo\PycharmProjects\frame-interpolation-pytorch\feature_extractor.py", line 144, in forward
    # want to generate.
    capped_sub_levels = min(len(image_pyramid) - i, self.sub_levels)
    sub_pyramids.append(self.extract_sublevels(image_pyramid[i], capped_sub_levels))
                        ~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    # Below we generate the cascades of features on each level of the feature
    # pyramid. Assuming sub_levels=3, The layout of the features will be
  File "C:\Users\Danylo\PycharmProjects\frame-interpolation-pytorch\feature_extractor.py", line 110, in forward
    pyramid = []
    for i, layer in enumerate(self.convs):
        head = layer(head)
               ~~~~~ <--- HERE
        pyramid.append(head)
        if i < n - 1:
  File "C:\Users\Danylo\anaconda3\envs\research\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "C:\Users\Danylo\anaconda3\envs\research\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "C:\Users\Danylo\anaconda3\envs\research\lib\site-packages\torch\nn\modules\activation.py", line 772, in forward
    def forward(self, input: Tensor) -> Tensor:
        return F.leaky_relu(input, self.negative_slope, self.inplace)
               ~~~~~~~~~~~~ <--- HERE
  File "C:\Users\Danylo\anaconda3\envs\research\lib\site-packages\torch\nn\functional.py", line 1633, in leaky_relu
        result = torch._C._nn.leaky_relu_(input, negative_slope)
    else:
        result = torch._C._nn.leaky_relu(input, negative_slope)
                 ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    return result

RuntimeError: Allocation on device
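In case it helps anyone debug: "RuntimeError: Allocation on device" is the TorchScript interpreter surfacing a failed GPU memory allocation, so in practice this is almost always the FILM model running out of VRAM while building its feature pyramid, and Rife working while Film fails fits that (Film needs noticeably more memory per frame pair). A minimal sketch of the downscale-then-interpolate workaround, assuming a generic two-frame interpolation callable (interp_model and its call signature are placeholders, not the node's actual API):

import torch
import torch.nn.functional as F

def safe_interpolate(interp_model, frame0, frame1, max_side=1024):
    """Run frame interpolation on downscaled inputs to stay within VRAM.

    frame0/frame1: (1, 3, H, W) float tensors on the GPU.
    max_side is a guess; lower it if the allocation error persists.
    """
    _, _, h, w = frame0.shape
    scale = min(1.0, max_side / max(h, w))
    if scale < 1.0:
        size = (int(h * scale), int(w * scale))
        frame0 = F.interpolate(frame0, size=size, mode="bilinear", align_corners=False)
        frame1 = F.interpolate(frame1, size=size, mode="bilinear", align_corners=False)
    with torch.no_grad():
        mid = interp_model(frame0, frame1)  # placeholder call signature
    if scale < 1.0:
        # Scale the interpolated frame back up to the source resolution.
        mid = F.interpolate(mid, size=(h, w), mode="bilinear", align_corners=False)
    return mid

Calling torch.cuda.empty_cache() between batches, or feeding the VFI node fewer frames at a time, can also free enough memory to get past this.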