xlibfly opened this issue 2 months ago
Seems to be still working for me, fully updated. If you open the browser DevTools Console (Usually F12 key) do you see any related errors?
Prestartup times for custom nodes:
   0.0 seconds: F:\ComfyUI-aki-v1.1\custom_nodes\rgthree-comfy
Total VRAM 16383 MB, total RAM 65470 MB
pytorch version: 2.3.1+cu121
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 : cudaMallocAsync
Using xformers cross attention
[Prompt Server] web root: F:\ComfyUI-aki-v1.1\web
Adding extra search path checkpoints F:\sd-webui-aki-v4.5\models/Stable-diffusion
Adding extra search path configs F:\sd-webui-aki-v4.5\models/Stable-diffusion
Adding extra search path vae F:\sd-webui-aki-v4.5\models/VAE
Adding extra search path loras F:\sd-webui-aki-v4.5\models/Lora
Adding extra search path loras F:\sd-webui-aki-v4.5\models/LyCORIS
Adding extra search path upscale_models F:\sd-webui-aki-v4.5\models/ESRGAN
Adding extra search path upscale_models F:\sd-webui-aki-v4.5\models/RealESRGAN
Adding extra search path upscale_models F:\sd-webui-aki-v4.5\models/SwinIR
Adding extra search path embeddings F:\sd-webui-aki-v4.5\embeddings
Adding extra search path hypernetworks F:\sd-webui-aki-v4.5\models/hypernetworks
Adding extra search path controlnet F:\sd-webui-aki-v4.5\models/ControlNet/models
[rgthree] Loaded 42 epic nodes.
[rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.
Import times for custom nodes:
   0.0 seconds: F:\ComfyUI-aki-v1.1\custom_nodes\websocket_image_save.py
   0.0 seconds: F:\ComfyUI-aki-v1.1\custom_nodes\sdxl_utility.py
   0.0 seconds: F:\ComfyUI-aki-v1.1\custom_nodes\rgthree-comfy
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using xformers attention in VAE
Using xformers attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
loaded completely 0.0 235.84423828125 True
F:\ComfyUI-aki-v1.1\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Requested to load AutoencoderKL
Loading 1 new model
loaded completely 0.0 159.55708122253418 True
Requested to load BaseModel
Loading 1 new model
loaded completely 0.0 1639.406135559082 True
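(Side note for anyone reading the log: the "Adding extra search path" lines come from ComfyUI's extra_model_paths.yaml, which lets ComfyUI reuse the model folders of an existing webui install. A minimal Python sketch of the idea, assuming a PyYAML-style mapping; the function name and paths here are illustrative, not ComfyUI's actual implementation:)

```python
import os
import yaml  # pip install pyyaml

def load_extra_paths(yaml_file: str) -> dict[str, list[str]]:
    """Collect extra model search paths from an extra_model_paths.yaml-style file.

    Each top-level section maps folder kinds (checkpoints, loras, vae, ...)
    to paths relative to an optional base_path, which is how one webui
    install's model folders get shared with ComfyUI.
    """
    search_paths: dict[str, list[str]] = {}
    with open(yaml_file, "r", encoding="utf-8") as f:
        config = yaml.safe_load(f) or {}
    for section in config.values():
        base = section.pop("base_path", "")  # e.g. F:\sd-webui-aki-v4.5
        for folder_name, rel_path in section.items():
            full_path = os.path.join(base, str(rel_path))
            print(f"Adding extra search path {folder_name} {full_path}")
            search_paths.setdefault(folder_name, []).append(full_path)
    return search_paths
```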
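Also, the UserWarning in there ("1Torch was not compiled with flash attention", stray "1" included, which is PyTorch's own typo) is harmless: the Windows wheels of that PyTorch version ship without the flash kernel, so scaled_dot_product_attention silently falls back to another backend. A quick sketch, assuming a CUDA build of PyTorch 2.x, to see which SDPA backends are enabled and exercise the same call:

```python
import torch
import torch.nn.functional as F

# These flags show which SDPA backends PyTorch is *allowed* to try,
# not proof they were compiled in; the warning above means the flash
# kernel is missing from this build, so another path is used instead.
print("flash SDP enabled:        ", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient SDP enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math SDP enabled:         ", torch.backends.cuda.math_sdp_enabled())

if torch.cuda.is_available():
    # Same call ComfyUI makes in attention.py; shapes are arbitrary
    # (batch, heads, sequence, head_dim) just to trigger the code path.
    q = k = v = torch.randn(1, 8, 64, 64, device="cuda", dtype=torch.float16)
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=None,
                                         dropout_p=0.0, is_causal=False)
    print(out.shape)  # torch.Size([1, 8, 64, 64])
```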
Hmm, I don't see anything too crazy. That app_mixlab file may be conflicting. Try removing the mixlab extension and see if that fixes it. If it does, then it could be a bug with mixlab.
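If you'd rather not delete it outright, here is a small sketch of the usual way to disable an extension: move its folder out of custom_nodes and restart. The folder name comfyui-mixlab-nodes below is a guess; check what the directory is actually called in your install:

```python
import shutil
from pathlib import Path

# Paths and the extension folder name are assumptions; adjust to
# whatever actually sits in your custom_nodes directory.
custom_nodes = Path(r"F:\ComfyUI-aki-v1.1\custom_nodes")
target = custom_nodes / "comfyui-mixlab-nodes"   # hypothetical folder name
backup = custom_nodes.parent / "disabled_nodes"

backup.mkdir(exist_ok=True)
if target.exists():
    shutil.move(str(target), str(backup / target.name))
    print(f"Moved {target.name} aside; restart ComfyUI and retest.")
else:
    print("Folder not found; list custom_nodes to find the real name.")
```

Renaming the folder with a .disabled suffix should work too, since ComfyUI skips those when loading custom nodes.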