comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

### Error loading model: missing text_projection.weight #5222

Open 22333-bit opened 1 month ago

22333-bit commented 1 month ago

Your question

Environment:

Description: I encountered an error when trying to load the model in ComfyUI. The error message states that `text_projection.weight` is missing.

Steps to Reproduce:

  1. Download the following model files and place them in G:\ComfyUI_windows_portable\ComfyUI\models\clip:
    • pytorch_model.bin
    • config.json
    • tokenizer.json
  2. Start ComfyUI.
  3. Attempt to load the model.

Expected result: The model should load without errors.

Actual result: An error is thrown indicating that text_projection.weight is missing.
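One way to narrow this down is to inspect which `text_projection` keys the checkpoint actually contains. This is a minimal diagnostic sketch, not ComfyUI code; `find_projection_keys` and the dummy checkpoint path are illustrative assumptions (point it at your own `pytorch_model.bin` instead):

```python
# Minimal sketch: list which text_projection keys a checkpoint contains,
# to see whether `text_projection.weight` is present or the projection is
# stored under another name (e.g. the OpenCLIP-style `text_projection`).
import torch

def find_projection_keys(path):
    state_dict = torch.load(path, map_location="cpu", weights_only=True)
    return sorted(k for k in state_dict if "text_projection" in k)

# Demo with a dummy checkpoint using the OpenCLIP-style key, where the
# projection matrix is saved as `text_projection`, not `text_projection.weight`.
torch.save({"text_projection": torch.zeros(4, 4)}, "dummy_clip.bin")
print(find_projection_keys("dummy_clip.bin"))  # ['text_projection']
```

If the list shows `text_projection` but not `text_projection.weight`, the checkpoint simply uses a different key layout than the loader expects, which matches the warning in the log below.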

Additional information:

Logs

G:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-10-12 16:29:23.932413
** Platform: Windows
** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
** Python executable: G:\ComfyUI_windows_portable\python_embeded\python.exe
** ComfyUI Path: G:\ComfyUI_windows_portable\ComfyUI
** Log path: G:\ComfyUI_windows_portable\comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
   0.6 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 24564 MB, total RAM 65349 MB
pytorch version: 2.4.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: G:\ComfyUI_windows_portable\ComfyUI\web
G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
### Loading: ComfyUI-Impact-Pack (V7.5.2)
### Loading: ComfyUI-Impact-Pack (Subpack: V0.7)
[Impact Pack] Wildcards loading done.
### Loading: ComfyUI-Manager (V2.50.3)
### ComfyUI Revision: 2754 [1b808952] *DETACHED | Released on '2024-10-10'

[rgthree] Loaded 42 fantastic nodes.
[rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.

Import times for custom nodes:
   0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation
   0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-portrait-master-zh-cn
   0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
   0.0 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
   0.2 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
   0.5 seconds: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack

Starting server

To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
FETCH DATA from: G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['text_projection.weight']
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
G:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLUX
Requested to load Flux
Loading 1 new model
loaded completely 0.0 11350.048889160156 True
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:13<00:00,  1.52it/s]
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 159.87335777282715 True
G:\ComfyUI_windows_portable\ComfyUI\nodes.py:1506: RuntimeWarning: invalid value encountered in cast
  img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
Prompt executed in 25.75 seconds

Other

No response

GPU-server commented 1 month ago

Did you find an answer?

BassJMagan commented 1 month ago

I have the same issue with Flux and SD3.5, but it doesn't seem to affect the output.
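That would be consistent with a key-layout mismatch rather than missing data: OpenCLIP-style checkpoints store the text projection as a plain matrix under `text_projection`, while the HF `nn.Linear` layout expects `text_projection.weight`. A loader can derive one from the other by transposing. This is a hedged sketch of that common remap (the function name is hypothetical, not ComfyUI's actual implementation):

```python
# Sketch of a common checkpoint-key remap (assumption: the OpenCLIP layout
# stores `text_projection` as an [in_features, out_features] matrix, while
# nn.Linear stores `weight` as [out_features, in_features]).
import torch

def remap_text_projection(state_dict):
    sd = dict(state_dict)
    if "text_projection" in sd and "text_projection.weight" not in sd:
        # Transpose so the matrix matches the nn.Linear weight layout.
        sd["text_projection.weight"] = sd.pop("text_projection").t().contiguous()
    return sd

sd = remap_text_projection({"text_projection": torch.randn(768, 512)})
print(sorted(sd))                                 # ['text_projection.weight']
print(tuple(sd["text_projection.weight"].shape))  # (512, 768)
```

If a loader performs a remap like this (or synthesizes a default projection), the "clip missing" message is a warning rather than a fatal error, which would explain why generation still completes.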

22333-bit commented 3 weeks ago

Did you find an answer?

No.