kijai / ComfyUI-KwaiKolorsWrapper

Diffusers wrapper to run Kwai-Kolors model
Apache License 2.0
539 stars 26 forks

Multiple Files missing. #8

Open DavidSnow1 opened 3 months ago

DavidSnow1 commented 3 months ago

I installed your wrapper, and on my initial run, was told that I was missing config.json in the text_encoder folder. I went to manually download it from HuggingFace, and it tried to save under a different name, despite it being called config.json. That was a bit confusing, though it's an easy fix:

Anyway, I renamed the file to config.json and ran it again. Then I got this error:

OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory C:\Users\Snow\ComfyUI_windows_portable\ComfyUI\models\diffusers\Kolors\text_encoder.

I cannot manually download these files because they don't exist in the text encoder folder on huggingface:

https://huggingface.co/Kwai-Kolors/Kolors/tree/main/text_encoder

Why is it looking for completely different files? I can't see flax_model.msgpack anywhere.

253701 commented 3 months ago

I also have this malfunction.

Error occurred when executing DownloadAndLoadChatGLM3:

Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory J:\ComfyUI_windows_portable\ComfyUI\models\diffusers\Kolors\text_encoder.

File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KwaiKolorsWrapper-main\nodes.py", line 123, in loadmodel
text_encoder = ChatGLMModel.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3260, in from_pretrained
raise EnvironmentError(

DavidSnow1 commented 3 months ago

I also have this malfunction.

I'm curious. Did you download this wrapper using Manager? That's what I did. I decided to delete everything and install it manually. Maybe that's the issue. I'll let you know how it turned out after I download that 16GB of files again. It'll take a while.

Miles-Johnson commented 3 months ago

I'll let you know how it turned out after I download that 16GB of files again. It'll take a while.

Just about to do this myself, same issue and process so far

Fezalion commented 3 months ago

After getting the same error, I restarted Comfy; now I am getting this after the download:

Error occurred when executing DownloadAndLoadChatGLM3:

[Errno 2] No such file or directory: 'D:\\aistuff\\ComfyUI_windows_portable\\ComfyUI\\models\\diffusers\\Kolors\\text_encoder\\pytorch_model-00001-of-00007.bin'

File "D:\aistuff\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KwaiKolorsWrapper\nodes.py", line 113, in loadmodel
snapshot_download(repo_id=model,
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\_snapshot_download.py", line 294, in snapshot_download
thread_map(
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\tqdm\contrib\concurrent.py", line 69, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\tqdm\contrib\concurrent.py", line 51, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\tqdm\std.py", line 1181, in __iter__
for obj in iterable:
File "concurrent\futures\_base.py", line 619, in result_iterator
File "concurrent\futures\_base.py", line 317, in _result_or_cancel
File "concurrent\futures\_base.py", line 456, in result
File "concurrent\futures\_base.py", line 401, in __get_result
File "concurrent\futures\thread.py", line 58, in run
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\_snapshot_download.py", line 268, in _inner_hf_hub_download
return hf_hub_download(
^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 1202, in hf_hub_download
return _hf_hub_download_to_local_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 1487, in _hf_hub_download_to_local_dir
_download_to_tmp_and_move(
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 1894, in _download_to_tmp_and_move
_chmod_and_move(incomplete_path, destination_path)
File "D:\aistuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py", line 1926, in _chmod_and_move
shutil.move(str(src), str(dst))
File "shutil.py", line 873, in move
File "shutil.py", line 448, in copy2
File "shutil.py", line 258, in copyfile

253701 commented 3 months ago

I also have this malfunction.

I'm curious. Did you download this wrapper using Manager? That's what I did. I decided to delete everything and install it manually. Maybe that's the issue. I'll let you know how it turned out after I download that 16GB of files again. It'll take a while.

Due to regional constraints, I am unable to connect to Hugging Face. I downloaded the model manually. How should I fix this issue, or where can I modify the download source address?
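One possible workaround, sketched below as an assumption rather than a confirmed fix: `huggingface_hub` honors the `HF_ENDPOINT` environment variable, so pointing it at a community mirror (hf-mirror.com is one such mirror, unaffiliated with this repo) before launching ComfyUI should redirect the node's auto-download.

```python
import os

# Assumption: huggingface_hub reads HF_ENDPOINT at import time, so this must
# be set before ComfyUI (and huggingface_hub) starts. The mirror URL is an
# example of a community-run mirror, not an official endpoint.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
print(os.environ["HF_ENDPOINT"])
```

On Windows portable, the same effect can be had by running `set HF_ENDPOINT=https://hf-mirror.com` in the console before starting `run_nvidia_gpu.bat`.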

kijai commented 3 months ago

I also have this malfunction.

Error occurred when executing DownloadAndLoadChatGLM3:

Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory J:\ComfyUI_windows_portable\ComfyUI\models\diffusers\Kolors\text_encoder.

File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KwaiKolorsWrapper-main\nodes.py", line 123, in loadmodel
text_encoder = ChatGLMModel.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "J:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3260, in from_pretrained
raise EnvironmentError(

I think you should update transformers. You can do that by running this while in your ComfyUI_windows_portable\python_embeded folder:

python.exe -m pip install -U transformers

kijai commented 3 months ago

Hey, instead of suggesting it, add it to the installation script so it will update transformers when installing

Absolutely not. That'd be fine if this were the only thing we wanted to run, but we can't assume every other node works with the latest transformers. This is why the Manager has blacklisted some packages and never updates them either.

kijai commented 3 months ago

Can you tell why it's actually looking for files from this one? https://huggingface.co/FacebookAI/roberta-base/tree/main

What do you mean by that? I just tested all this on a fresh portable install, multiple times (good thing I have fiber), and never saw anything like that or had any problems. I did clean up the download filters a bit, as there are many files in their repo that are not used at all; it's a mess for sure. They had also forgotten cache files in the folder, which could be the cause of some of these issues. I'm now filtering those out too, but old downloads will still have them. Not that I had issues even with them, but there's absolutely no reason to include them.

kijai commented 3 months ago

The filenames are exactly what is missing in the Comfy log and this guy's first message. We all have errors asking for tf_model.h5, which is https://huggingface.co/FacebookAI/roberta-base/blob/main/tf_model.h5. I'm on portable and a fast network too. Still not resolved even though I did download all the files from this Facebook repo. I think downloading this chat thing needs tighter code, or come on, just let us download the files and add a loader node

Those are just generic model filenames, not specific to any repo. It's looking for a model file, but possibly due to the transformers version or a missing config file, it doesn't know to look for the sharded files. For reference, the minimum files needed in the text_encoder folder are these:

[screenshot: directory listing of the minimum required text_encoder files]

You can already download them manually. One thing to note when doing that is that Hugging Face (for some asinine reason) adds the folder name to the downloaded filenames, so you easily end up with "text_encoder_config.json" when it needs to be just "config.json".
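The browser-added prefix can also be stripped in bulk. A minimal stdlib sketch (the function name and default prefix are mine, and the folder path is whatever your text_encoder directory actually is):

```python
from pathlib import Path

def strip_prefix(folder: str, prefix: str = "text_encoder_") -> list[str]:
    """Rename e.g. 'text_encoder_config.json' back to 'config.json'.

    Returns the new names of the files that were renamed.
    """
    renamed = []
    for f in Path(folder).iterdir():
        if f.is_file() and f.name.startswith(prefix):
            target = f.with_name(f.name[len(prefix):])
            f.rename(target)
            renamed.append(target.name)
    return renamed

# e.g. strip_prefix(r"C:\...\ComfyUI\models\diffusers\Kolors\text_encoder")
```

Files that never had the prefix (like tokenizer.model) are left untouched.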

kijai commented 3 months ago

All this I know. I need to manually point to the proper chat weights. Show us how to resolve this: we have the files on disk, no mistakes in the filenames, what now? Usually weights have loaders; this repo doesn't have one, it has a downloader, but it's mismatching... A node that loads pytorch_model-00001-of-00007.bin and the consecutive files

All such a node could do is point to a folder; the folder structure would have to be identical anyway for transformers. Possibly what we could do is save the weights into a single .safetensors. I've never done that with transformers models, so I'm unsure. Then the config files could be included in the repo and just a single model file selected.

DavidSnow1 commented 3 months ago

Well, this is turning into a nightmare. In a desperate bid to get these nodes working, I updated transformers using:

python.exe -m pip install -U transformers

Then I reinstalled the nodes for the umpteenth time. Now they are entirely broken. Just red in the graph. Doesn't matter if I install using Manager, or install using the instructions on the front page. They no longer work. Considering I had installed the nodes multiple times before with no problems, the transformers update is almost certainly the culprit.

Can anyone decipher this for me, or give me general advice on how to deal with these warnings? Next time I install Comfy, it won't be the portable version. It's a colossal pain in the ass.

[screenshot: console warnings]

kijai commented 3 months ago

Well, this is turning into a nightmare. In a desperate bid to get these nodes working, I updated transformers using:

python.exe -m pip install -U transformers

Then I reinstalled the nodes for the umpteenth time. Now they are entirely broken. Just red in the graph. Doesn't matter if I install using Manager, or install using the instructions on the front page. They no longer work. Considering I had installed the nodes multiple times before with no problems, the transformers update is almost certainly the culprit.

Can anyone decipher this for me, or give me general advice on how to deal with these warnings? Next time I install Comfy, it won't be the portable version. It's a colossal pain in the ass.

[screenshot: console warnings]

That just shows it was already up to date, so the command actually didn't do anything. If the nodes are red, there's an import error, and you can find it in the ComfyUI console.

Fezalion commented 3 months ago

The latest commit and the transformers update seem to fix the problem; I can generate now.

kijai commented 3 months ago

I have now packed the model into a single .safetensors to simplify the manual download:

https://huggingface.co/Kijai/ChatGLM3-safetensors/tree/main

loaded from:

ComfyUI\models\LLM\checkpoints

[screenshot: loader node setup]

kijai commented 3 months ago

Great, do you think the SD3 text encoder will work with this? It's a much lower filesize, and if not, is there any way to prune this one? Oh... it is actually pruned?

Nope, SD3 uses Google's T5 XXL; this one is specifically trained for ChatGLM3.

I'm not sure if/how to save the quantized weights... comparing them now, and 8bit is honestly just as good as fp16; 4bit is quite different, but not always worse.

DavidSnow1 commented 3 months ago

Well, I am at a complete loss here. I have downloaded countless custom nodes, and I've never run into a situation like this - especially when it comes to nodes I've successfully installed multiple times before.

Remember - This only started to happen after I updated transformers. Here are some console messages:

[screenshot: console messages]

[screenshot: console messages, continued]

kijai commented 3 months ago

The main issue for me is loading times; it takes like 10 minutes to load the text encoder on a 3090... way, way too long

Hold on, figured out a way to save it quantized.

kijai commented 3 months ago

Can you do that for all the weights? So loading is as fast as it can possibly be?

For the text encoder, yes:

https://huggingface.co/Kijai/ChatGLM3-safetensors/tree/main

Also need to update the nodes to use them.

The UNet of the SDXL model is already fp16, and the VAE is no longer loaded at all, as we use the Comfy VAE node.

kijai commented 3 months ago

Can you do that for all the weights? So loading is as fast as it can possibly be?

For the text encoder, yes:

https://huggingface.co/Kijai/ChatGLM3-safetensors/tree/main

The UNet of the SDXL model is already fp16, and the VAE is no longer loaded at all, as we use the Comfy VAE node.

Well, I am at a complete loss here. I have downloaded countless custom nodes, and I've never run into a situation like this - especially when it comes to nodes I've successfully installed multiple times before.

Remember - This only started to happen after I updated transformers. Here are some console messages:

[screenshot: console messages]

[screenshot: console messages, continued]

Quite a lot of other nodes failing there too, but for this one the culprit is on this line:

[screenshot: the failing line in the console log]

Which means you need to update diffusers; it can be done the same way as transformers:

python.exe -m pip install -U diffusers

I obviously can't speak for every node, but all of mine that use diffusers work with the latest version. The minimum required here should be 0.28.2.
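For checking whether an installed package meets a stated minimum like 0.28.2, a stdlib-only sketch (the helper name is mine, and the naive numeric comparison is an assumption that ignores pre-release suffixes like `.dev0`):

```python
from importlib.metadata import version, PackageNotFoundError

def meets_minimum(pkg: str, minimum: str) -> bool:
    """True if pkg is installed at a version >= minimum (naive numeric compare)."""
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return False
    def parse(v: str):
        # keep only the purely numeric dot-separated parts, e.g. "0.28.2" -> (0, 28, 2)
        return tuple(int(p) for p in v.split(".") if p.isdigit())
    return parse(installed) >= parse(minimum)

# e.g. meets_minimum("diffusers", "0.28.2")
```

Running this from python_embeded's interpreter checks the same environment ComfyUI portable actually uses.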

853603530 commented 3 months ago

TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

kijai commented 2 months ago

Looks like it's borked again? It still downloads multiple safetensors instead of the new pruned models... I have to revert because the loading times are so ridiculous (10 minutes on a 3090). I don't really want to test this model out for this very reason; that idea to make it dependent on some chat model is not good at all... Maybe in China everyone has an RTX 9000. Also, pruning the text encoder should have zero effect on actual image quality; translation might be a bit worse, and that's about it. I tried to use the single-safetensors 4bit; it can't load it and it does not download it, it downloads the full, non-pruned, multi-safetensors text encoder.

There's no auto-download for the quantized models for now; they have to be manually downloaded and loaded with the new normal loader node, which only loads a single file.

DavidSnow1 commented 2 months ago

By the way Kijai, I just wanted to thank you for everything you do.