Closed Gyramuur closed 20 hours ago
When you download manually from huggingface, it adds the subfolder name to the file name. For example you have tokenizer_tokenizer_config.json there when it should just be tokenizer_config.json. Same with some of the other filenames.
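If it helps anyone else hitting this: a minimal cleanup sketch that strips the duplicated folder-name prefix from every file (the strip_folder_prefix helper name and the example path are mine, not part of the node pack):

```python
from pathlib import Path

def strip_folder_prefix(folder: Path) -> None:
    """Rename files like tokenizer_tokenizer_config.json back to
    tokenizer_config.json by stripping the parent folder's name."""
    prefix = folder.name + "_"
    for f in folder.iterdir():
        if f.is_file() and f.name.startswith(prefix):
            f.rename(f.with_name(f.name[len(prefix):]))

# Illustrative path -- adjust to wherever your model folder lives.
root = Path(r"Z:\webui\ComfyUI_windows_portable\ComfyUI\models\pyramidflow\pyramid-flow-sd3")
if root.exists():
    for sub in root.iterdir():
        if sub.is_dir():
            strip_folder_prefix(sub)
```

Back up the folder first; the rename is not reversible if two files collapse to the same name.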
Hey thanks for the quick response, that got me past the error. Now I'm hitting "Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory Z:\webui\ComfyUI_windows_portable\ComfyUI\models\pyramidflow\pyramid-flow-sd3\text_encoder_3"
Inside that folder I have model-00001-of-00002.safetensors and model-00002-of-00002.safetensors.
Obviously I can't rename them both to be just "model.safetensors" and I don't know how to merge them, so I'm not sure what to do.
It's the same issue, the config in this case should just be config.json.
For reference my local files:
N:\AI\ComfyUI\models\pyramidflow\pyramid-flow-sd3
│ LICENSE.md
│ README.md
│
├───causal_video_vae
│ config.json
│ diffusion_pytorch_model.bin
│
├───diffusion_transformer_384p
│ config.json
│ diffusion_pytorch_model.bin
│
├───diffusion_transformer_768p
│ config.json
│ diffusion_pytorch_model.bin
│
├───text_encoder
│ config.json
│ model.safetensors
│
├───text_encoder_2
│ config.json
│ model.safetensors
│
├───text_encoder_3
│ config.json
│ model-00001-of-00002.safetensors
│ model-00002-of-00002.safetensors
│ model.safetensors.index.json
│
├───tokenizer
│ merges.txt
│ special_tokens_map.json
│ tokenizer_config.json
│ vocab.json
│
├───tokenizer_2
│ merges.txt
│ special_tokens_map.json
│ tokenizer_config.json
│ vocab.json
│
└───tokenizer_3
special_tokens_map.json
spiece.model
tokenizer.json
tokenizer_config.json
Hey thanks again. :) That got me past that error, in this case I was missing "model.safetensors.index.json" from the text_encoder_3 folder.
Now I am getting the same issue as #3 but since it's unrelated I will close this one as resolved. Thanks :D
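For anyone landing here with the same sharded-checkpoint error: the model-00001-of-00002.safetensors files are not meant to be renamed or merged. Loaders follow model.safetensors.index.json, whose weight_map maps each tensor name to the shard file that contains it, which is why the missing index file was the real problem. A minimal sketch of that lookup (the shard_for_tensor helper name is mine):

```python
import json

def shard_for_tensor(index_path: str, tensor_name: str) -> str:
    """Return the shard filename that holds tensor_name, per the
    weight_map in a model.safetensors.index.json file."""
    with open(index_path) as fh:
        index = json.load(fh)
    return index["weight_map"][tensor_name]
```

So as long as the index file sits next to the shards, the loader resolves everything itself.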
That one should be fixed if you just update the nodes.
Maybe there is something I'm doing wrong here. I've downloaded everything at https://huggingface.co/rain1011/pyramid-flow-sd3/tree/main, copied the folder structure, and made sure to place all the folders and files into "ComfyUI_windows_portable\ComfyUI\models\pyramidflow\pyramid-flow-sd3"
The full error is:
"Can't load tokenizer for 'Z:\webui\ComfyUI_windows_portable\ComfyUI\models\pyramidflow\pyramid-flow-sd3\tokenizer'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'Z:\webui\ComfyUI_windows_portable\ComfyUI\models\pyramidflow\pyramid-flow-sd3\tokenizer' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer."
Here's the tree structure of the model folder:
Z:.
└───pyramid-flow-sd3
    ├───casual_video_vae
    │       causal_video_vae_config.json
    │       diffusion_pytorch_model.bin
    │
    ├───diffusion_transformer_384p
    │       diffusion_pytorch_model.bin
    │       diffusion_transformer_384p_config.json
    │
    ├───diffusion_transformer_768p
    │       config.json
    │       diffusion_pytorch_model.safetensors
    │
    ├───text_encoder
    │       model.safetensors
    │       text_encoder_config.json
    │
    ├───text_encoder_2
    │       model.safetensors
    │       text_encoder_2_config.json
    │
    ├───text_encoder_3
    │       model-00001-of-00002.safetensors
    │       model-00002-of-00002.safetensors
    │       text_encoder_3_config.json
    │
    ├───tokenizer
    │       tokenizer_merges.txt
    │       tokenizer_special_tokens_map.json
    │       tokenizer_tokenizer_config.json
    │       tokenizer_vocab.json
    │
    ├───tokenizer_2
    │       tokenizer_2_merges.txt
    │       tokenizer_2_special_tokens_map.json
    │       tokenizer_2_tokenizer_config.json
    │       tokenizer_2_vocab.json
    │
    └───tokenizer_3
            spiece.model
            tokenizer_3_special_tokens_map.json
            tokenizer_3_tokenizer.json
            tokenizer_3_tokenizer_config.json
But upon queuing the prompt I get hit with the titular error. Any ideas?