jtydhr88 / ComfyUI-Unique3D

ComfyUI-Unique3D is a set of custom nodes that runs AiuniAI/Unique3D inside ComfyUI
Apache License 2.0

NotImplementedError: auto not supported. Supported strategies are: balanced #13

Closed · CoraAI closed this issue 2 days ago

CoraAI commented 3 days ago

```
got prompt
PyTorch version 2.2.0+cu121 available.
JAX version 0.4.30 available.
Warning! extra parameter in cli is not verified, may cause erros.
Loading pipeline components...: 100%|██████████| 5/5 [00:00<00:00, 87.79it/s]
You have disabled the safety checker for <class 'custum_3d_diffusion.custum_pipeline.unifield_pipeline_img2mvimg.StableDiffusionImage2MVCustomPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
RGB image not RGBA! still remove bg!
E:\anaconda3\envs\comfy_test311\Lib\site-packages\diffusers\models\attention_processor.py:1476: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
  hidden_states = F.scaled_dot_product_attention(
  0%|          | 0/30 [00:00<?, ?it/s]Warning! condition_latents is not None, but self_attn_ref is not enabled! This warning will only be raised once.
100%|██████████| 30/30 [00:01<00:00, 27.42it/s]
model_index.json: 100%|██████████| 541/541 [00:00<00:00, 542kB/s]
tokenizer/special_tokens_map.json: 100%|██████████| 472/472 [00:00<00:00, 468kB/s]
scheduler/scheduler_config.json: 100%|██████████| 308/308 [00:00<?, ?B/s]
tokenizer/tokenizer_config.json: 100%|██████████| 806/806 [00:00<?, ?B/s]
safety_checker/config.json: 100%|██████████| 4.72k/4.72k [00:00<?, ?B/s]
text_encoder/config.json: 100%|██████████| 617/617 [00:00<?, ?B/s]
(…)ature_extractor/preprocessor_config.json: 100%|██████████| 342/342 [00:00<?, ?B/s]
tokenizer/merges.txt: 100%|██████████| 525k/525k [00:00<00:00, 3.08MB/s]
vae/config.json: 100%|██████████| 547/547 [00:00<?, ?B/s]
unet/config.json: 100%|██████████| 743/743 [00:00<?, ?B/s]
tokenizer/vocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 1.08MB/s]
diffusion_pytorch_model.safetensors: 100%|██████████| 335M/335M [01:42<00:00, 3.26MB/s]
model.safetensors: 100%|██████████| 492M/492M [04:06<00:00, 2.00MB/s]
diffusion_pytorch_model.safetensors: 100%|██████████| 3.44G/3.44G [20:28<00:00, 2.80MB/s]
Fetching 14 files: 100%|██████████| 14/14 [20:30<00:00, 87.86s/it]
Loading pipeline components...: 100%|██████████| 6/6 [00:01<00:00, 4.83it/s]
Pipelines loaded with dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.
(the message above is printed three times)
!!! Exception during processing!!! auto not supported. Supported strategies are: balanced
Traceback (most recent call last):
  File "E:\00_AI_Program\comfyTEST\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\00_AI_Program\comfyTEST\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\00_AI_Program\comfyTEST\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\00_AI_Program\comfyTEST\ComfyUI\custom_nodes\ComfyUI-Unique3D\run.py", line 62, in run
    pipe = load_common_sd15_pipe(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "E:\00_AI_Program\comfyTEST\ComfyUI\custom_nodes\ComfyUI-Unique3D\scripts\sd_model_zoo.py", line 107, in load_common_sd15_pipe
    pipe: StableDiffusionPipeline = model_from_ckpt_or_pretrained(
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\00_AI_Program\comfyTEST\ComfyUI\custom_nodes\ComfyUI-Unique3D\scripts\sd_model_zoo.py", line 44, in model_from_ckpt_or_pretrained
    pipe = model_cls.from_pretrained(ckpt_or_pretrained, torch_dtype=torch_dtype, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\anaconda3\envs\comfy_test311\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "E:\anaconda3\envs\comfy_test311\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 680, in from_pretrained
    raise NotImplementedError(
NotImplementedError: auto not supported. Supported strategies are: balanced
```

In the web UI, the process stops at the Unique3D Load Pipeline node, which is outlined in purple.
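For reference, the traceback bottoms out in diffusers' `pipeline_utils.py` rejecting a device placement strategy of "auto". A minimal sketch of the same failure, assuming diffusers >= 0.28 and using `runwayml/stable-diffusion-v1-5` only as an example checkpoint (this is not the exact code path in `sd_model_zoo.py`):

```python
# Hedged sketch: reproduces the NotImplementedError outside ComfyUI.
# Assumes diffusers >= 0.28 is installed; the checkpoint name is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    device_map="auto",  # recent diffusers only accepts "balanced" here
)
# -> NotImplementedError: auto not supported. Supported strategies are: balanced
```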

jtydhr88 commented 3 days ago

This issue occurs because you installed diffusers 0.28 or newer, and this repo currently only supports up to 0.27.2. You are likely using a conda env rather than the embedded ComfyUI Python; please refer to my .bat file to see my procedure for the embedded Python.
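To confirm which interpreter and which diffusers version the workflow is actually using, a quick check like the following sketch helps (run it with the same python.exe that launches ComfyUI):

```python
# Sketch: print the active interpreter and the installed diffusers version.
import sys
import diffusers

print(sys.executable)         # which Python environment is active
print(diffusers.__version__)  # 0.28+ will hit the "auto not supported" error
```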

CoraAI commented 3 days ago

> This issue occurs because you installed diffusers 0.28 or newer, and this repo currently only supports up to 0.27.2. You are likely using a conda env rather than the embedded ComfyUI Python; please refer to my .bat file to see my procedure for the embedded Python.

So your suggestion is to use the embedded ComfyUI Python? But I have some other repositories installed as well; is that OK? I chose to use conda envs because ComfyUI experiments fail often, and debugging node issues costs too much for a non-programmer like me. I have read your setup procedure and thought it might work if I did every step with the python.exe of a dedicated env instead of the default embedded Python; no error occurred until I ran the workflow.

Is there any way to fix it? Anyway, I'm all ears; if starting a fresh setup is the easiest way, then I'll do it.

jtydhr88 commented 3 days ago

In fact I would suggest using conda; it is my favorite method as well. To fix your issue, run `pip uninstall diffusers` first, then `pip install diffusers==0.27.2`.
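After the reinstall, a one-line sanity check in the same environment (a sketch, not part of the repo) confirms the pin took effect:

```python
# Sketch: verify the downgraded diffusers is the one being imported.
import diffusers
assert diffusers.__version__ == "0.27.2", diffusers.__version__
```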

jtydhr88 commented 3 days ago

However, if you take a look at my .bat file, it performs other setup steps as well; if you run into other issues, you can refer to it.

CoraAI commented 3 days ago

> In fact I would suggest using conda; it is my favorite method as well. To fix your issue, run `pip uninstall diffusers` first, then `pip install diffusers==0.27.2`.

Worked perfectly! And the results are pretty good, totally worth trying. Thanks!