Closed: lior007 closed this issue 1 month ago.
Using the flux dev repo, you need all the files below. Make sure your repo path looks like D:/ComfyUI_windows_portable_nvidia/ComfyUI_windows_portable/ComfyUI/black-forest-labs/FLUX.1-dev, or show me your ComfyUI console error.
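For reference, a local FLUX.1-dev folder typically looks like the following (a sketch based on the black-forest-labs/FLUX.1-dev Hugging Face repo layout; exact contents can vary between snapshots):

FLUX.1-dev/
├── model_index.json
├── ae.safetensors
├── flux1-dev.safetensors
├── scheduler/
├── text_encoder/
├── text_encoder_2/
├── tokenizer/
├── tokenizer_2/
├── transformer/
└── vae/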
Thank you. This is my repo: D:/ComfyUI_windows_portable_nvidia/ComfyUI_windows_portable/ComfyUI/black-forest-labs/FLUX.1-dev
the error is:
got prompt
Process using 2 roles, mode is img2img....
total_vram is 16379.5, aggressive_offload is True, offload is True
using repo_id and ckpt, start flux-pulid processing...
!!! Exception during processing !!! exceptions must derive from BaseException
Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1396, in story_model_loader
    pipe=flux_loader(folder_paths,ckpt_path,repo_id,AutoencoderKL,save_model,model_type,pulid,clip_vision_path,NF4,vae_id,offload,aggressive_offload,pulid_ckpt,quantized_mode,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\model_loader_utils.py", line 591, in flux_loader
    raise "Now,using pulid must choice ae from comfyUI vae menu"
TypeError: exceptions must derive from BaseException
To use flux pulid you need to choose an ae (pick the ae from the ComfyUI VAE menu).
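As a side note, the reason the console shows "exceptions must derive from BaseException" instead of that hint is that flux_loader raises a plain string, which Python 3 does not allow; only exception instances or classes can be raised. A minimal sketch of what happens, using the string from the traceback above (the ValueError line is only an illustrative alternative, not the node's actual code):

# Python 3 refuses to raise a bare string, so the real hint never reaches the user.
try:
    raise "Now,using pulid must choice ae from comfyUI vae menu"  # what flux_loader does
except TypeError as e:
    print(e)  # -> exceptions must derive from BaseException

# Raising a real exception would surface the intended message instead:
# raise ValueError("Using PuLID requires choosing an ae from the ComfyUI VAE menu")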
OK, here is a new print screen. Now there is a new error:
!!! Exception during processing !!! 'list' object has no attribute 'replace'
Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1689, in story_sampler
    for value in gen:
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 700, in process_generation
    id_image = pipe.generate_image(
               ^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\PuLID\app_flux.py", line 126, in generate_image
    inp = prepare(t5=self.t5, clip=self.clip, img=x, prompt=opts.text, if_repo=self.if_repo)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\PuLID\flux\sampling.py", line 49, in prepare
    txt = t5(prompt)
          ^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\PuLID\flux\modules\conditioner.py", line 49, in forward
    tokens = self.clip_cf.tokenize(text)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 113, in tokenize
    return self.tokenizer.tokenize_with_weights(text, return_word_ids)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\flux.py", line 27, in tokenize_with_weights
    out["l"] = self.clip_l.tokenize_with_weights(text, return_word_ids)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 462, in tokenize_with_weights
    text = escape_important(text)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 283, in escape_important
    text = text.replace("\\)", "\0\1")
           ^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'replace'
Use unet checkpoints, like this one on Hugging Face. Please read the readme.
2. I used exactly the same settings as in this example: https://github.com/smthemex/ComfyUI_StoryDiffusion/blob/main/examples/flux_pulid_new.png
OK. NOW IT'S WORKING!! THANKS
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:/ComfyUI_windows_portable_nvidia/ComfyUI_windows_portable/ComfyUI/black-forest-labs/FLUX.1-dev/text_encoder_2.
I have these files:
├── config.json
├── model-00001-of-00002.safetensors
├── model-00002-of-00002.safetensors
├── model.safetensors.index.json
What should I do?
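One thing worth checking (a suggestion, not a definitive fix): this OSError means the loader did not find usable weights in text_encoder_2 even though sharded safetensors are present, which often points to an incomplete or corrupted download, or to a transformers/diffusers version too old to pick up sharded safetensors. A small sketch to verify that every shard named in model.safetensors.index.json exists and is non-empty (the folder path is taken from the error above):

import json, os

folder = r"D:/ComfyUI_windows_portable_nvidia/ComfyUI_windows_portable/ComfyUI/black-forest-labs/FLUX.1-dev/text_encoder_2"

# The index file maps every weight tensor to the shard file that should contain it.
with open(os.path.join(folder, "model.safetensors.index.json")) as f:
    index = json.load(f)

for shard in sorted(set(index["weight_map"].values())):
    path = os.path.join(folder, shard)
    if os.path.isfile(path) and os.path.getsize(path) > 0:
        print(f"{shard}: OK ({os.path.getsize(path)} bytes)")
    else:
        print(f"{shard}: MISSING or empty, re-download this file")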