Uminosachi / sd-webui-inpaint-anything

Inpaint Anything extension performs stable diffusion inpainting on a browser UI using masks from Segment Anything.
Apache License 2.0

CUDA not working, now only able to run on CPU. #94

Open seset opened 11 months ago

seset commented 11 months ago

I just found this super great extension recently; it is far easier to use than Segment Anything.

I tried installing it from a URL, from the webui, and even manually via git pull, but every SAM model I try fails with the error below.

I am not encountering any CUDA-related problems (CUDA 11.8) when using SD or Segment Anything.

So for now I have to turn on the CPU option and use the smaller FastSAM-x model instead.

Please do help...

2023-09-17 22:19:38,438 - Inpaint Anything - INFO - input_image: (953, 950, 3) uint8
2023-09-17 22:19:39,120 - Inpaint Anything - INFO - SamAutomaticMaskGenerator sam_vit_b_01ec64.pth
Traceback (most recent call last):
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py", line 168, in run_sam
    sam_masks = inpalib.generate_sam_masks(input_image, sam_model_id, anime_style_chk)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\inpalib\samlib.py", line 139, in generate_sam_masks
    sam_masks = sam_mask_generator.generate(input_image)
  File "D:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\segment_anything_fb\automatic_mask_generator.py", line 163, in generate
    mask_data = self._generate_masks(image)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\segment_anything_fb\automatic_mask_generator.py", line 206, in _generate_masks
    crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\segment_anything_fb\automatic_mask_generator.py", line 251, in _process_crop
    keep_by_nms = batched_nms(
  File "D:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 75, in batched_nms
    return _batched_nms_coordinate_trick(boxes, scores, idxs, iou_threshold)
  File "D:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\jit\_trace.py", line 1220, in wrapper
    return fn(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 94, in _batched_nms_coordinate_trick
    keep = nms(boxes_for_nms, scores, iou_threshold)
  File "D:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "D:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
    return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]

2023-09-17 22:19:44,241 - Inpaint Anything - ERROR - Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

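The NotImplementedError above typically means the installed torchvision build does not include CUDA kernels (for example, a CPU-only torchvision wheel paired with a CUDA-enabled torch). A minimal reproduction outside the webui, assuming the same venv and a CUDA GPU, might look like this (hypothetical diagnostic, not part of the extension):

```python
# Calls torchvision's NMS op on CUDA tensors directly, the same op that fails
# inside automatic_mask_generator.py. On a CPU-only torchvision build this
# raises the same NotImplementedError.
import torch
from torchvision.ops import nms

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")
print(nms(boxes, scores, iou_threshold=0.5))
```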

====================================================================================

And sometimes when reloading the webui, the following warnings pop up:

Restarting UI...
Closing server running on port: 7860
2023-09-17 22:14:45,587 - ControlNet - INFO - ControlNet v1.1.410
sd-webui-prompt-all-in-one background API service started successfully.
D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:926: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  out_image = gr.Image(label="Inpainted image", elem_id="ia_out_image", type="pil",
D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:942: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  cleaner_out_image = gr.Image(label="Cleaned image", elem_id="ia_cleaner_out_image", type="pil",
D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1088: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  cn_out_image = gr.Image(label="Inpainted image", elem_id="ia_cn_out_image", type="pil",
D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1127: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  sam_image = gr.Image(label="Segment Anything image", elem_id="ia_sam_image", type="numpy", tool="sketch", brush_radius=8,
D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1138: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  sel_mask = gr.Image(label="Selected mask image", elem_id="ia_sel_mask", type="numpy", tool="sketch", brush_radius=12,
D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\inpaint_anything.py:1141: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
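These GradioDeprecationWarning messages are unrelated to the CUDA problem; they only mean Gradio wants the `style` arguments passed to the component constructor instead. As a rough sketch (assuming a Gradio version where the constructor accepts these arguments directly), the change the warning asks for looks like:

```python
import gradio as gr

with gr.Blocks() as demo:
    # Deprecated pattern that triggers the warning:
    #   with gr.Row().style(equal_height=False):
    # Pattern the warning suggests: pass the argument to the constructor.
    with gr.Row(equal_height=False):
        out_image = gr.Image(label="Inpainted image", type="pil")
```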

Uminosachi commented 11 months ago

Have you tried deleting the venv folder inside the stable-diffusion-webui directory, then running webui.bat, and reinstalling torch and torchvision from scratch?
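If you do reinstall, a quick sanity check is to run a small script with the venv's Python to confirm that both packages are CUDA builds (a diagnostic sketch only; the exact version strings depend on your install):

```python
# Run with the webui venv's python.exe. A torchvision wheel built without CUDA
# support typically reports a plain version string (no "+cuXXX" suffix) even
# when torch itself is a CUDA build.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```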

Uminosachi commented 11 months ago

I've implemented a fix for the 'torchvision::nms' NotImplementedError issue. Please update the sd-webui-inpaint-anything repository and give it a try.
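For reference, one generic workaround for this class of failure (not necessarily the fix committed to the repository) is to fall back to CPU for the NMS call when the CUDA kernel is unavailable, for example:

```python
# Illustrative sketch only: run NMS on CPU if the CUDA op is missing,
# then move the kept indices back to the original device.
import torch
from torchvision.ops import nms


def nms_with_cpu_fallback(boxes: torch.Tensor, scores: torch.Tensor,
                          iou_threshold: float) -> torch.Tensor:
    try:
        return nms(boxes, scores, iou_threshold)
    except NotImplementedError:
        keep = nms(boxes.cpu(), scores.cpu(), iou_threshold)
        return keep.to(boxes.device)
```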

seset commented 11 months ago

> I've implemented a fix for the 'torchvision::nms' NotImplementedError issue. Please update the sd-webui-inpaint-anything repository and give it a try.

Problem perfectly solved! Thank you!