[Open] AB00k opened this issue 2 months ago
Basically, the pipeline includes an NSFW check that sometimes detects false positives. It could be useful to add a parameter to the node that skips the check. If this sounds interesting, I could open a pull request in the next few days.
you can edit `pipelines/OmsDiffusionPipeline.py` and comment out this line: `image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)`, setting `has_nsfw_concept = None` instead
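A sketch of how that workaround (and the parameter proposed above) could be combined; the function names and signatures here are illustrative stand-ins, not the actual `OmsDiffusionPipeline` API:

```python
# Hypothetical sketch: gate the NSFW check behind a flag instead of
# deleting the call. Names are illustrative, not the real pipeline code.
def run_safety_checker(images):
    # Stand-in for the real checker, which can return false positives.
    return images, [False] * len(images)

def postprocess(images, skip_safety_check=False):
    if skip_safety_check:
        # Equivalent to commenting out the checker call in the pipeline:
        has_nsfw_concept = None
    else:
        images, has_nsfw_concept = run_safety_checker(images)
    return images, has_nsfw_concept
```

Exposing a flag like `skip_safety_check` as a node input would let users opt out per-workflow without editing the pipeline source.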
@frankchieng I have tried it out, but I got another error, shown below.
```
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
safety_checker\model.safetensors not found
Loading pipeline components...:   0%| | 0/7 [00:00<?, ?it/s]
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:08<00:00, 1.17s/it]
----checkpoints loaded from path: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\checkpoints\cloth_segm.pth----
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py:3809: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\attention_processor.py:1244: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  hidden_states = F.scaled_dot_product_attention(
 55%|█████████████████████████████████████████████ | 11/20 [02:13<01:49, 12.19s/it]
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json [DONE]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [04:04<00:00, 12.21s/it]
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\lora.py:358: UserWarning: Plan failed with an OutOfMemoryError: Allocation on device (Triggered internally at ..\aten\src\ATen\native\cudnn\Conv_v8.cpp:924.)
  return F.conv2d(
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\lora.py:358: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ..\aten\src\ATen\native\cudnn\Conv_v8.cpp:919.)
  return F.conv2d(
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py:456: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ..\aten\src\ATen\native\cudnn\Conv_v8.cpp:919.)
  return F.conv2d(input, weight, bias, self.stride,
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\image_processor.py:90: RuntimeWarning: invalid value encountered in cast
  images = (images * 255).round().astype("uint8")
Prompt executed in 274.93 seconds
```
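The final `RuntimeWarning: invalid value encountered in cast` in that log suggests the decoded image array contains NaNs (consistent with the cuDNN failures just above it). A minimal sketch of the same cast that `diffusers/image_processor.py` performs shows why a NaN image comes out black; the array here is a hypothetical stand-in for the decoded output:

```python
import warnings
import numpy as np

# Hypothetical stand-in for a decoded image that came back as all NaN.
images = np.full((1, 4, 4, 3), np.nan, dtype=np.float32)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # The same cast as the warning line in the log above.
    pixels = (images * 255).round().astype("uint8")

# On x86, NaN -> uint8 typically collapses to 0, i.e. an all-black image.
print(pixels.min(), pixels.max())
```

So the black output is a symptom of NaNs appearing earlier in the pipeline (e.g. the failed cuDNN/OOM convolutions), not of the cast itself.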
Even though the prompt executes successfully, it still generates a black image:
```
got prompt
safety_checker\model.safetensors not found
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:07<00:00, 1.10s/it]
----checkpoints loaded from path: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\checkpoints\cloth_segm.pth----
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [04:03<00:00, 12.18s/it]
Prompt executed in 273.24 seconds
```
I'm trying to run the workflow and it generates a completely black image; I even tried running it on CPU and still had this issue. Above is the cmd log from when I run the workflow.
Here are the parameters I'm using:

![image](https://github.com/frankchieng/ComfyUI_MagicClothing/assets/121571684/64c87db5-8927-47ef-b291-a9e6073d4e13)