comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Error occurred when executing CLIPTextEncode #4654

Closed: fahadshery closed this issue 2 months ago

fahadshery commented 2 months ago

Expected Behavior

Can't seem to load the Prompt:

[screenshot attached]

Actual Behavior

I have pasted the error in the Debug Logs section below.

Steps to Reproduce

  # stable diffusion

  stable-diffusion-base-download:
    build: ./stable-diffusion-webui-docker/services/download/
    image: stable-diffusion-base
    container_name: stable-diffusion-base
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./stable-diffusion-webui-docker/data:/data

  comfy-webui:
    build: ./stable-diffusion-webui-docker/services/comfy/
    image: comfy-webui
    container_name: comfy-webui
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      - CLI_ARGS=
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./stable-diffusion-webui-docker/data:/data
      - ./stable-diffusion-webui-docker/output:/output
    stop_signal: SIGKILL
    tty: true
    deploy:
      resources:
        reservations:
          devices:
              - driver: nvidia
                device_ids: ['0']
                capabilities: [compute, utility]
    restart: unless-stopped

Debug Logs

Mounted .cache
Mounted comfy
Mounted input
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-08-24 12:44:32.737662
** Platform: Linux
** Python version: 3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0]
** Python executable: /opt/conda/bin/python
** ComfyUI Path: /stable-diffusion
** Log path: /stable-diffusion/comfyui.log

Prestartup times for custom nodes:
   1.9 seconds: /stable-diffusion/custom_nodes/ComfyUI-Manager

Total VRAM 24576 MB, total RAM 48135 MB
pytorch version: 2.3.0
Set vram state to: NORMAL_VRAM
Device: cuda:0 GRID P40-24Q : cudaMallocAsync
Using pytorch cross attention
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
[Prompt Server] web root: /stable-diffusion/web
Adding extra search path checkpoints /data/models/Stable-diffusion
Adding extra search path configs /data/models/Stable-diffusion
Adding extra search path vae /data/models/VAE
Adding extra search path loras /data/models/Lora
Adding extra search path upscale_models /data/models/RealESRGAN
Adding extra search path upscale_models /data/models/ESRGAN
Adding extra search path upscale_models /data/models/SwinIR
Adding extra search path upscale_models /data/models/GFPGAN
Adding extra search path hypernetworks /data/models/hypernetworks
Adding extra search path controlnet /data/models/ControlNet
Adding extra search path gligen /data/models/GLIGEN
Adding extra search path clip /data/models/CLIPEncoder
Adding extra search path embeddings /data/embeddings
Adding extra search path custom_nodes /data/config/comfy/custom_nodes
### Loading: ComfyUI-Manager (V2.50.2)
### ComfyUI Revision: 2610 [7df42b9a] | Released on '2024-08-23'

Import times for custom nodes:
   0.0 seconds: /stable-diffusion/custom_nodes/websocket_image_save.py
   0.1 seconds: /stable-diffusion/custom_nodes/ComfyUI-Manager

Starting server

To see the GUI go to: http://0.0.0.0:7860
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
FETCH DATA from: /stable-diffusion/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
FETCH DATA from: /stable-diffusion/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json [DONE]
FETCH DATA from: /stable-diffusion/custom_nodes/ComfyUI-Manager/.cache/1514988643_custom-node-list.json [DONE]
FETCH DATA from: /stable-diffusion/custom_nodes/ComfyUI-Manager/.cache/746607195_github-stats.json [DONE]
FETCH DATA from: /stable-diffusion/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
FETCH DATA from: /stable-diffusion/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Requested to load SD1ClipModel
Loading 1 new model
!!! Exception during processing !!! CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Traceback (most recent call last):
  File "/stable-diffusion/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/stable-diffusion/execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/stable-diffusion/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/stable-diffusion/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/stable-diffusion/nodes.py", line 65, in encode
    output = clip.encode_from_tokens(tokens, return_pooled=True, return_dict=True)
  File "/stable-diffusion/comfy/sd.py", line 125, in encode_from_tokens
    self.load_model()
  File "/stable-diffusion/comfy/sd.py", line 157, in load_model
    model_management.load_model_gpu(self.patcher)
  File "/stable-diffusion/comfy/model_management.py", line 554, in load_model_gpu
    return load_models_gpu([model])
  File "/stable-diffusion/comfy/model_management.py", line 540, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "/stable-diffusion/comfy/model_management.py", line 326, in model_load
    raise e
  File "/stable-diffusion/comfy/model_management.py", line 322, in model_load
    self.real_model = self.model.patch_model(device_to=patch_model_to, lowvram_model_memory=lowvram_model_memory, load_weights=load_weights, force_patch_weights=force_patch_weights)
  File "/stable-diffusion/comfy/model_patcher.py", line 407, in patch_model
    self.load(device_to, lowvram_model_memory=lowvram_model_memory, force_patch_weights=force_patch_weights, full_load=full_load)
  File "/stable-diffusion/comfy/model_patcher.py", line 379, in load
    x[2].to(device_to)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1173, in to
    return self._apply(convert)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 804, in _apply
    param_applied = fn(param)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1159, in convert
    return t.to(
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Prompt executed in 5.74 seconds
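
As the traceback itself notes, CUDA errors can be reported asynchronously at a later API call, so the stack trace above may not point at the call that actually failed. A sketch of how to get a synchronous trace in this compose setup (assumes the comfy-webui service defined above; CUDA_LAUNCH_BLOCKING is a standard CUDA/PyTorch environment variable, not a ComfyUI flag):

  comfy-webui:
    environment:
      # Force synchronous kernel launches so the traceback points at the
      # failing call. Slows everything down; use for debugging only.
      - CUDA_LAUNCH_BLOCKING=1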

Other

No response

ltdrdata commented 2 months ago

Redundant issue: https://github.com/comfyanonymous/ComfyUI/issues/1845#issuecomment-1962677211

fahadshery commented 2 months ago

Redundant issue: #1845 (comment)

Just checked, and thanks... I had to add `CLI_ARGS=--disable-cuda-malloc` to the docker compose file.
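
For anyone landing here later: ComfyUI enables the cudaMallocAsync allocator when it believes the device supports it (note "cudaMallocAsync" in the device line of the logs above), and virtualized GPUs like the GRID P40-24Q here can reject it with "operation not supported". A sketch of the relevant environment block under that assumption, matching the compose file from the issue:

  comfy-webui:
    environment:
      - PUID=${PUID:-1000}
      - PGID=${PGID:-1000}
      # Fall back to the default PyTorch caching allocator instead of
      # cudaMallocAsync, which this vGPU driver does not support.
      - CLI_ARGS=--disable-cuda-malloc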