Lalimec opened 5 days ago
First, try updating PyTorch to the latest version.
I actually did update after reporting; it's 2.5.something now. What changed is that instead of an error, I now get infinite inference time. It just hangs after the loading part is done. The generation below got stuck at VAE decoding; another Flux inference wasn't even able to get past model loading and got stuck at the CLIP Text Encode node.
...
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
/home/ubuntu/cemil-test/ComfyUI/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
clip missing: ['text_projection.weight']
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9319.23095703125 True
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
Requested to load Flux
Loading 1 new model
loaded completely 0.0 22700.097778320312 True
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00, 2.00it/s]
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 159.87335777282715 True
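To tell a genuine hang apart from a merely slow step, one option is to wrap the suspect call in a watchdog timeout. This is a minimal standard-library sketch, not part of ComfyUI: `run_with_timeout` and the calls passed to it are hypothetical stand-ins for e.g. the VAE decode step.

```python
import threading

def run_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn in a worker thread; return (finished, result).

    If fn does not return within timeout_s seconds, report a hang
    instead of waiting forever. The worker thread keeps running in
    the background, which is acceptable for a one-off diagnostic.
    """
    result = {}

    def worker():
        result["value"] = fn(*args, **kwargs)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout_s)
    if t.is_alive():
        return False, None  # still running after the deadline: likely hung
    return True, result.get("value")

# A fast call completes normally; a stuck call is flagged as hung.
finished, value = run_with_timeout(lambda: 2 + 2, timeout_s=1.0)
print(finished, value)  # True 4
```

If the wrapped decode call never returns even with a generous timeout, that points at a hung CUDA kernel or driver issue rather than slow hardware.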
Expected Behavior
I shouldn't be getting a CUDA error.
Actual Behavior
I am not able to use ComfyUI after the last update; it was working fine yesterday.
Steps to Reproduce
It is not specific to any particular workflow, but here is one anyway: Flux.json
Debug Logs
Other
No response