s9roll7 / animatediff-cli-prompt-travel

animatediff prompt travel
Apache License 2.0

NotImplementedError: Could not run 'torchvision::deform_conv2d' with arguments from the 'CUDA' backend. #195

Closed: dancemanUK closed this issue 10 months ago

dancemanUK commented 10 months ago

mask from [girl] are output to stylize\2023-12-07T00-03-39-sample-mistoonanime_v20\fg_00_2023-12-07T03-48-07    stylize.py:1082
Pretrained flow completion model has loaded...
Pretrained ProPainter has loaded...
Network [InpaintGenerator] was created. Total number of parameters: 39.4 million. To see the architecture, do print(network).
03:48:51 INFO Processing: [90 frames]...    mask.py:544

Traceback (most recent call last):
  File "G:\animatediff-cli-prompt-travel-other\src\animatediff\stylize.py", line 1103, in create_mask
    create_bg(frame_dir, bg_inpaint_dir, masked_area,
              use_half=True,
              raft_iter=20,
              subvideo_length=80 if not low_vram else 50,
  File "G:\animatediff-cli-prompt-travel-other\src\animatediff\utils\mask.py", line 595, in create_bg
    pred_flows_bi_sub, _ = fix_flow_complete.forward_bidirect_flow(
        (gt_flows_bi[0][:, s_f:e_f], gt_flows_bi[1][:, s_f:e_f]),
        flow_masks[:, s_f:e_f+1])
  File "G:\animatediff-cli-prompt-travel-other\src\animatediff\repo\ProPainter\model\recurrent_flow_completion.py", line 327, in forward_bidirect_flow
    pred_flows_forward, pred_edges_forward = self.forward(masked_flows_forward, mask
  File "G:\animatediff-cli-prompt-travel-other\src\animatediff\repo\ProPainter\model\recurrent_flow_completion.py", line 288, in forward
    feat_prop = self.feat_prop_module(feat_mid)
  File "G:\animatediff-cli-prompt-travel-other\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "G:\animatediff-cli-prompt-travel-other\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\animatediff-cli-prompt-travel-other\src\animatediff\repo\ProPainter\model\recurrent_flow_completion.py", line 101, in forward
    feat_prop = self.deform_align[module_name](feat_prop, cond)
  File "G:\animatediff-cli-prompt-travel-other\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "G:\animatediff-cli-prompt-travel-other\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\animatediff-cli-prompt-travel-other\src\animatediff\repo\ProPainter\model\recurrent_flow_completion.py", line 42, in forward
    return torchvision.ops.deform_conv2d(x, offset, self.weight, self.bias,
                                         self.stride, self.padding,
                                         self.dilation, mask)
  File "G:\animatediff-cli-prompt-travel-other\venv\lib\site-packages\torchvision\ops\deform_conv.py", line 92, in deform_conv2d
    return torch.ops.torchvision.deform_conv2d(
        input,
        weight,
        offset,
  File "G:\animatediff-cli-prompt-travel-other\venv\lib\site-packages\torch\_ops.py", line 692, in __call__
    return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::deform_conv2d' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::deform_conv2d' is only available for these backends: [CPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\deform_conv2d_kernel.cpp:1162 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:153 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:290 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:296 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:382 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:249 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:710 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:161 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:165 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:157 [backend fallback]
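
The tail of that dump is the tell: a plain CPU kernel is registered for deform_conv2d, and the Autograd* entries are generic wrappers, but there is no CUDA kernel entry at all. That usually means the installed torchvision wheel was built without CUDA support even though torch itself is a CUDA build. A minimal sanity check, using only standard torch/torchvision attributes:

```python
import torch
import torchvision

# A CUDA-enabled torch prints a version like "2.1.0+cu121" and a CUDA
# version string; a CPU-only build reports None for torch.version.cuda.
print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```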

dancemanUK commented 10 months ago

torch              2.1.0+cu121
torchaudio         2.1.0
torchvision        0.16.0
tqdm               4.66.1
transformers       4.34.1
triton             2.0.0
typer              0.9.0
typing_extensions  4.4.0
tzdata             2023.3
urllib3            1.26.13
xformers           0.0.22.post7
yapf               0.40.2
zipp               3.17.0
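
Note the mismatch in that listing: torch is a +cu121 build, but torchvision 0.16.0 carries no CUDA tag, which is consistent with a CPU-only torchvision wheel (e.g. pulled from plain PyPI on Windows) sitting next to a CUDA torch. A quick way to confirm, assuming torchvision exposes version.cuda as in recent releases:

```python
import torchvision

# Set at build time; None means the wheel was compiled without CUDA
# kernels, so ops like deform_conv2d have no CUDA implementation.
print(torchvision.version.cuda)
```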

dancemanUK commented 10 months ago

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

it's OK!
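
For anyone hitting the same error: the reinstall fixes it because all three packages now come from the same CUDA wheel index, so torchvision gets its compiled CUDA kernels back. The cu121 index (https://download.pytorch.org/whl/cu121) should presumably work just as well on a CUDA 12.x setup; the essential point is that torch and torchvision must be matching CUDA builds.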