Official Implementation of "Learning Inclusion Matching for Animation Paint Bucket Colorization"
When I run the test on the Test folder with the command line: python basicsr/test.py -opt options/test/basicpbc_pbch_test_option.yml, it doesn't work on my GPU #8
2024-04-07 18:06:31,686 INFO: Loading BasicPBC model from ckpt/basicpbc.pth, with param key: [params_ema].
2024-04-07 18:06:31,771 INFO: Model [PBCModel] is created.
2024-04-07 18:06:31,771 INFO: Testing PaintBucket_Char...
0%| | 0/2990 [00:00<?, ?it/s]C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\functional.py:507: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:3550.)
return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
0%| | 0/2990 [00:02<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\admin\BasicPBC\basicsr\test.py", line 45, in
test_pipeline(root_path)
File "C:\Users\admin\BasicPBC\basicsr\test.py", line 40, in test_pipeline
model.validation(test_loader, current_iter=opt["name"], tb_logger=None, save_img=opt["val"]["save_img"])
File "c:\users\admin\basicpbc\basicsr\models\base_model.py", line 48, in validation
self.nondist_validation(dataloader, current_iter, tb_logger, save_img)
File "c:\users\admin\basicpbc\basicsr\models\pbc_model.py", line 129, in nondist_validation
model_inference.inference_frame_by_frame(save_path, save_img, accu, self_prop)
File "c:\users\admin\basicpbc\basicsr\models\pbc_model.py", line 246, in inference_frame_by_frame
match_tensor = self.model(self.dis_data_to_cuda(test_data))
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "c:\users\admin\basicpbc\basicsr\archs\basicpbc_arch.py", line 597, in forward
desc = self.segment_desc(warpped_target_img, data["segment"], data["line"], use_offset=True)
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "c:\users\admin\basicpbc\basicsr\archs\basicpbc_arch.py", line 408, in forward
x = self.encoder(img, line, use_offset)
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "c:\users\admin\basicpbc\basicsr\archs\basicpbc_arch.py", line 365, in forward
x1 = self.DCN1(x, use_offset)
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "c:\users\admin\basicpbc\basicsr\archs\basicpbc_arch.py", line 289, in forward
color_fea = torchvision.ops.deform_conv2d(
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torchvision\ops\deform_conv.py", line 92, in deform_conv2d
return torch.ops.torchvision.deform_conv2d(
File "C:\ProgramData\miniconda3\envs\abinated\lib\site-packages\torch_ops.py", line 755, in call
return self._op(*args, **(kwargs or {}))
NotImplementedError: Could not run 'torchvision::deform_conv2d' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::deform_conv2d' is only available for these backends: [CPU, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
CPU: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\deform_conv2d_kernel.cpp:1162 [kernel]
Meta: registered at /dev/null:19 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradCPU: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradCUDA: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradHIP: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradXLA: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradMPS: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradIPU: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradXPU: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradHPU: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradVE: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradLazy: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradMTIA: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse1: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse2: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse3: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradMeta: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradNestedTensor: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:297 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at C:\actions-runner_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\deform_conv2d_kernel.cpp:48 [kernel]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]
Just look at this: as soon as it reaches the processing stage, it suddenly dumps all of the output above. It looks like it switches to the CPU, but it still fails!
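
One plausible reading of the traceback (my interpretation, not something stated in the repo): the backend dump lists a CPU kernel for torchvision::deform_conv2d but no plain CUDA kernel, which usually points to a CPU-only torchvision build installed alongside a CUDA-enabled torch. A minimal sketch to check the installed builds, assuming nothing beyond a stock torch/torchvision install:

# Environment check (illustrative, not part of BasicPBC):
# a CUDA torch (e.g. "2.2.0+cu121") paired with a CPU-only torchvision
# (e.g. "0.17.0+cpu") would explain the missing CUDA kernel.
import torch
import torchvision

print("torch      :", torch.__version__)
print("torchvision:", torchvision.__version__)   # should carry the same +cuXXX suffix as torch
print("CUDA build :", torch.version.cuda, "| cuda available:", torch.cuda.is_available())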
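
If the two versions do match, a standalone call to the failing op can show whether the problem is the environment rather than BasicPBC itself. This is a sketch with arbitrary tensor shapes, not the shapes used inside basicpbc_arch.py:

# Minimal deform_conv2d call on CUDA (hypothetical shapes, for diagnosis only).
# On a broken install this raises the same NotImplementedError;
# on a healthy CUDA install it prints torch.Size([1, 4, 6, 6]).
import torch
import torchvision

x = torch.randn(1, 3, 8, 8, device="cuda")               # input: N x C_in x H x W
weight = torch.randn(4, 3, 3, 3, device="cuda")          # C_out x C_in x kH x kW
offset = torch.zeros(1, 2 * 3 * 3, 6, 6, device="cuda")  # 2*kH*kW offsets per output location
out = torchvision.ops.deform_conv2d(x, offset, weight)
print(out.shape)

If that call also fails, reinstalling torch and torchvision together from the matching CUDA wheel index (for example pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121, with the cuXXX tag adjusted to your CUDA version) is the usual fix; the exact command depends on your setup.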