sanchitwadehra opened this issue 8 months ago
Seems to be this part of comfy/model_management.py:

def should_use_bf16(device=None):
    if is_intel_xpu():
        return True
    if device is None:
        device = torch.device("cuda")
    props = torch.cuda.get_device_properties(device)
    if props.major >= 8:
        return True
    return False
Adding the --force-fp16 flag to startup works for me.
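For reference, a minimal sketch of a defensive guard (my assumption about a possible fix, not the upstream patch): bail out before touching torch.cuda on builds that were not compiled with CUDA, which is exactly what the DirectML setup below hits.

def should_use_bf16(device=None):
    if is_intel_xpu():  # existing helper in comfy/model_management.py
        return True
    # Assumed guard: on DirectML/CPU-only torch builds, any torch.cuda call
    # raises "Torch not compiled with CUDA enabled", so skip bf16 detection.
    if not torch.cuda.is_available():
        return False
    if device is None:
        device = torch.device("cuda")
    props = torch.cuda.get_device_properties(device)
    return props.major >= 8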
Sorry, can you explain in detail what exactly needs to be fixed and where to insert --force-fp16? Thank you.
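(To clarify for anyone else reading: there is nothing to edit in the code. --force-fp16 is a startup argument to main.py, so for the DirectML launch command shown in the log below it would presumably be:

python main.py --directml --lowvram --force-fp16

i.e. the flag is simply appended to whatever command you already use to start ComfyUI.)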
So my PC specs are:
CPU: AMD Ryzen 7 5800X
GPU: AMD RX 6600 XT
RAM: 16GB
VRAM: 8GB
I had ComfyUI working using this guide: https://youtu.be/8rB7RqKvU5U?si=-DxRLZKw3xy4bqPY
Everything was working correctly until this commit on the main branch: "Stable Cascade Stage C."
Then I started getting this error: "Error occurred when executing CheckpointLoaderSimple:
Torch not compiled with CUDA enabled
File "C:\Users\WADEHRA\conda_cm\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\WADEHRA\conda_cm\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "C:\Users\WADEHRA\conda_cm\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "C:\Users\WADEHRA\conda_cm\ComfyUI\nodes.py", line 552, in load_checkpoint out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings")) File "C:\Users\WADEHRA\conda_cm\ComfyUI\comfy\sd.py", line 459, in load_checkpoint_guess_config unet_dtype = model_management.unet_dtype(model_params=parameters, supported_dtypes=model_config.supported_inference_dtypes) File "C:\Users\WADEHRA\conda_cm\ComfyUI\comfy\model_management.py", line 502, in unet_dtype if should_use_bf16(device): File "C:\Users\WADEHRA\conda_cm\ComfyUI\comfy\model_management.py", line 781, in should_use_bf16 props = torch.cuda.get_device_properties(device) File "C:\Users\WADEHRA\anaconda3\envs\conda_cm\lib\site-packages\torch\cuda__init.py", line 395, in get_device_properties _lazy_init() # will define _get_device_properties File "C:\Users\WADEHRA\anaconda3\envs\conda_cm\lib\site-packages\torch\cuda\init__.py", line 239, in _lazy_init raise AssertionError("Torch not compiled with CUDA enabled")"
This is the terminal output: "Microsoft Windows [Version 10.0.19045.4046]
(c) Microsoft Corporation. All rights reserved.
C:\Users\WADEHRA\conda_cm\ComfyUI>conda activate conda_cm
(conda_cm) C:\Users\WADEHRA\conda_cm\ComfyUI>git reset --hard f83109f09bec04f39f028c275b4eb1231adba00a
HEAD is now at f83109f Stable Cascade Stage C.
(conda_cm) C:\Users\WADEHRA\conda_cm\ComfyUI>python main.py --directml --lowvram
Using directml with device:
Total VRAM 1024 MB, total RAM 16328 MB
Set vram state to: LOW_VRAM
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "C:\Users\WADEHRA\conda_cm\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\WADEHRA\conda_cm\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\WADEHRA\conda_cm\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\WADEHRA\conda_cm\ComfyUI\nodes.py", line 552, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "C:\Users\WADEHRA\conda_cm\ComfyUI\comfy\sd.py", line 459, in load_checkpoint_guess_config
    unet_dtype = model_management.unet_dtype(model_params=parameters, supported_dtypes=model_config.supported_inference_dtypes)
  File "C:\Users\WADEHRA\conda_cm\ComfyUI\comfy\model_management.py", line 502, in unet_dtype
    if should_use_bf16(device):
  File "C:\Users\WADEHRA\conda_cm\ComfyUI\comfy\model_management.py", line 781, in should_use_bf16
    props = torch.cuda.get_device_properties(device)
  File "C:\Users\WADEHRA\anaconda3\envs\conda_cm\lib\site-packages\torch\cuda\__init__.py", line 395, in get_device_properties
    _lazy_init()  # will define _get_device_properties
  File "C:\Users\WADEHRA\anaconda3\envs\conda_cm\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Prompt executed in 0.05 seconds
got prompt
(the identical ERROR:root traceback repeats for the second prompt)
Prompt executed in 0.05 seconds"
But this was solved when I rolled back to this commit on the main branch: "Stable Cascade Stage A."
I don't know why this is happening. I would be glad to have it solved.
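(For anyone debugging the same thing, a quick way to confirm the root cause is to check whether the installed torch build has CUDA support at all; a minimal check, run inside the same conda env:

import torch

print(torch.__version__)
# On a DirectML/CPU-only build this prints False, which is why any call into
# torch.cuda, such as get_device_properties, raises
# "Torch not compiled with CUDA enabled".
print(torch.cuda.is_available())

If it prints False, the error comes from should_use_bf16 querying torch.cuda on a non-CUDA build, as described in the comment above.)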
Thanks